Sometimes, you can get a lot done if you group like tasks together and then do all of one group in one fell swoop.
For example, a given work project might require research on seven or eight issues, each dealing with a separate part of the project. When that's the case, I tend to spend one day researching all of them, quickly saving each relevant item/article/etc. that I find to my desktop without worrying about labeling the material or organizing it by issue.
When I do this, I know that after several hours of research I am likely to have good information on every issue, even if it's unorganized. Then, at a later time, I label and organize everything I found. In this way, I have researched and organized the information in two "batches," rather than doing both at once, let alone one issue at a time.
As another example, in writing this blog, I quickly drafted a dozen entries in just a few hours, not worrying about anything except getting the main ideas across. I knew that I would later do a "batch" edit focused on adding examples and improving the wording and layout.
Like pipelining (discussed in another post), batch processing increases throughput (how much gets done in a given period of time). I also think it increases overall output because within each batch you tend to "get on a roll" and accomplish more (and more creatively) than if you had divided the effort across multiple sessions.