Looping in Mosaic Workflows

Looping is useful when you want to process multiple items or perform an action repeatedly, such as sending a message to every contact in your address book. Mosaic Workflows handles this repetitive processing automatically, so you don't need to build loops into your workflows explicitly. Note that looping isn't suitable for all nodes; refer to Node exceptions below for the excluded nodes.

Using loops in Mosaic Workflows

Mosaic Workflows' nodes are designed to handle multiple items as input, process these items, and produce corresponding outputs. Each item can be viewed as an individual data point or a single row in the node's output table.

Typically, nodes operate on each item individually. For instance, to bulk-send the names and notes of customers from the Customer Datastore node as Slack messages, you would:

  1. Connect the Slack node to the Customer Datastore node.
  2. Configure the parameters.
  3. Execute the node.

You would receive multiple messages, one for each item. This is how you can process multiple items without having to explicitly connect nodes in a loop.
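
To make this concrete, here's a minimal sketch of that per-item behavior in plain Python. The item structure and the send_slack_message helper are illustrative assumptions, not part of the Mosaic Workflows API:

```python
# Illustrative items, as they might arrive from a Customer Datastore
# node: one dict per customer (one row in the node's output table).
incoming_items = [
    {"name": "Ada", "notes": "Prefers email follow-up"},
    {"name": "Grace", "notes": "Renewal due next month"},
]

def send_slack_message(channel: str, text: str) -> None:
    """Hypothetical stand-in for the Slack node's delivery step."""
    print(f"[{channel}] {text}")

# The node runs its operation once per input item, producing one
# output (here, one Slack message) per item; no explicit loop needs
# to be wired into the workflow.
for item in incoming_items:
    send_slack_message("#customers", f"{item['name']}: {item['notes']}")
```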

Executing nodes once

For situations where you don't want a node to process all received items, for example sending a Slack message only to the first customer, toggle the Execute Once parameter in the Settings tab of that node. This setting is helpful when the incoming data contains multiple items and you only want to process the first one.
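
Conceptually, Execute Once replaces the per-item mapping with a single run on the first item. A minimal sketch, reusing the same illustrative item structure as above:

```python
incoming_items = [
    {"name": "Ada", "notes": "Prefers email follow-up"},
    {"name": "Grace", "notes": "Renewal due next month"},
]

# With Execute Once enabled, the node ignores everything but the
# first incoming item and runs a single time.
if incoming_items:
    first = incoming_items[0]
    print(f"[#customers] {first['name']}: {first['notes']}")
```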

Creating loops

Mosaic Workflows typically handles the iteration for all incoming items. However, there are certain scenarios where you will have to create a loop to iterate through all items. Refer to Node exceptions for a list of nodes that don't automatically iterate over all incoming items.

Loop until a condition is met

To create a loop in a workflow, connect the output of one node to the input of a previous node. Add an IF node to set comparison operations that control when the loop should stop.
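
In conventional code, this pattern is a loop with an exit condition. A minimal sketch, where fetch_pending_jobs stands in for the looped-back node and the comparison plays the role of the IF node (both names are assumptions for illustration):

```python
import random

def fetch_pending_jobs() -> int:
    """Hypothetical node: returns how many jobs are still pending."""
    return random.randint(0, 3)

while True:
    pending = fetch_pending_jobs()  # the looped-back node runs again
    if pending == 0:                # the IF node's comparison operation
        break                       # condition met: the loop stops
```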

Loop until all items are processed

Use the Loop Over Items node for looping until all items are processed. To process each item individually, set Batch Size to 1.

You can batch the data in groups and process these batches. This approach is useful for avoiding API rate limits when processing a large volume of incoming data, or when you want to process a specific group of returned items.

The Loop Over Items node stops executing after all the incoming items get divided into batches and passed on to the next node in the workflow, so it's not necessary to add an IF node to stop the loop.
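
Here's a minimal sketch of what the Loop Over Items node does under the hood, assuming a simple list of items (the batches helper is illustrative, not part of the Mosaic Workflows API):

```python
from typing import Iterator, List

def batches(items: List[dict], batch_size: int) -> Iterator[List[dict]]:
    """Split items into consecutive groups of at most batch_size."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

items = [{"id": i} for i in range(10)]

# Each batch is handed to the next node in turn; with batch_size=1,
# every item is processed individually. Once the final batch has been
# passed along, the loop simply ends; no IF node is required.
for batch in batches(items, batch_size=3):
    print("next node receives:", batch)
```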

Node exceptions

Nodes and operations where you need to design a loop into your workflow:

  • Airtable:
    • List: this operation executes once, not for each incoming item.
  • Coda:
    • Get All: for the Table and View resources, this operation executes once.
  • CrateDB node executes and iterates over all incoming items only for Postgres-related functions (for example, pgInsert, pgUpdate, pgQuery).
  • Google Cloud Firestore:
    • Get All: for the Collection and Document resources, this operation executes only once.
  • Google Drive:
    • List: this operation executes only once, not for each incoming item.
  • Google Sheets:
    • Read: this operation executes only once for the Sheet resource.
    • Update: this operation updates multiple rows if they're in the same range. It doesn't iterate through additional ranges.
  • HTTP Request: you must handle pagination yourself. If your API call returns paginated results, create a loop to fetch one page at a time (see the pagination sketch after this list).
  • Iterable handles list operations in a single request, using the list ID defined for the first item. To address different lists in a single execution, you must create a loop with a batch size of 1.
  • Microsoft SQL doesn't natively handle looping, so if you want the node to process all incoming items, you must create a loop.
  • MongoDB executes Find once, regardless of the number of incoming items.
  • Postgres node executes and iterates over all incoming items only for Postgres-related functions (for example, pgInsert, pgUpdate, pgQuery).
  • QuestDB node executes and iterates over all incoming items only for Postgres-related functions (for example, pgInsert, pgUpdate, pgQuery).
  • Read/Write File From Disk node fetches files from the specified path only once; it doesn't execute multiple times based on the incoming data. However, if the path is referenced from the incoming data, the node fetches the files for all valid paths.
  • Redis:
    • Info: this operation executes only once, regardless of the number of items in the incoming data.
  • RSS node executes only once, regardless of the number of items in the incoming data.
  • TimescaleDB node executes and iterates over all incoming items only for Postgres-related functions (for example, pgInsert, pgUpdate, pgQuery).
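
For the HTTP Request case above, here's a minimal sketch of a manual pagination loop, assuming a page-numbered API that reports has_more in its response (the URL and response field names are illustrative, not a real endpoint):

```python
import requests

def fetch_all_pages(url: str) -> list:
    """Fetch one page at a time until the API reports no more pages."""
    results: list = []
    page = 1
    while True:
        resp = requests.get(url, params={"page": page})
        resp.raise_for_status()
        data = resp.json()
        results.extend(data["items"])  # accumulate this page's items
        if not data.get("has_more"):   # assumed pagination flag
            break
        page += 1
    return results
```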