
Module 5 Lesson 2: Split In Batches
Handle the many. Learn how to process thousands of items in small chunks to avoid hitting API rate limits or crashing your server's memory.
If you have 1,000 users and you try to email them all at once, Gmail will block you. The Split In Batches node lets you slow down and process them in groups of 10 or 100.
1. How it Works
It works as a loop:
- It takes 1,000 items.
- It gives the next 10 items to your "Action" node.
- You do something (e.g., Send Email).
- You connect the end of your action back to the START of the Split node.
- It gives you the next 10.
- Once the list is empty, it follows the "Done" output.
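The steps above can be sketched in plain Python. This is a minimal illustration of the loop, not n8n's actual implementation; `send_email` is a hypothetical stand-in for your "Action" node.

```python
sent = []

def send_email(user):
    # Hypothetical stand-in for the "Send Email" action node.
    sent.append(user)

users = [f"user{i}@example.com" for i in range(1, 1001)]
batch_size = 10
batch_count = 0

# Each pass hands the next `batch_size` items to the action,
# then "loops back" for more. When the range is exhausted,
# the loop exits -- that is the "Done" output.
for start in range(0, len(users), batch_size):
    batch = users[start:start + batch_size]  # the next 10 items
    for user in batch:
        send_email(user)
    batch_count += 1
```

With 1,000 users and a batch size of 10, the action runs in 100 passes of 10 items each.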
2. Preventing Rate Limits
Many APIs enforce a limit such as "5 requests per second." If you have 1,000 items:
- Use Split In Batches (Size: 1).
- Add a Wait node (1 second).
- Connect back.
- Result: Your 1,000 items will take about 1,000 seconds (roughly 17 minutes), but you stay safely below the limit and avoid "429 Too Many Requests" errors.
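The same pattern as code: batch size 1 plus a fixed wait between calls is the simplest form of rate limiting. This is a hedged sketch; `call_api` is a hypothetical placeholder for one external request, and the demo uses a tiny delay and item count so it finishes quickly.

```python
import time

def call_api(item):
    # Hypothetical stand-in for one external API request.
    return {"item": item, "status": 200}

def process_with_rate_limit(items, delay_seconds):
    """Process items one at a time with a pause between calls."""
    results = []
    for item in items:
        results.append(call_api(item))
        time.sleep(delay_seconds)  # the Wait node inside the loop
    return results

# With delay_seconds=1.0 and 1,000 items this would take ~1,000 seconds,
# staying well under a 5-requests-per-second limit.
demo = process_with_rate_limit(range(3), delay_seconds=0.01)
```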
3. Saving Memory
Processing 10,000 items in one single JSON object can crash your n8n container. By splitting them, n8n only has to hold about 10 items in memory at a time, keeping your RAM usage low and your server stable.
4. The "Check" Logic
The Split In Batches node exposes a noItemsLeft boolean in its node context. You use an IF node at the end of your loop to check it:
- "Are there more items?" -> Go back to Split.
- "Empty?" -> Finish the workflow.
Exercise: The Batch Relay
- Create a mock list of 20 items.
- Use a Split In Batches node with size 5.
- Add a node that logs the "Batch Number."
- Loop it back. How many times did the logger run?
- Why is it important to use Wait (Module 3) inside a loop?
- Research: What is the "Wait for all" setting in a Split node?
Summary
Split In Batches is the secret to reliable scale. By breaking one big task into many small tasks, you respect the rules of external APIs and ensure that your automation can handle any workload without crashing.
Next Lesson: Safety first: Error Triggers and Handling.