OmniChain

Efficient visual programming for AI language models


Use language models as focused processing components rather than overburdened pseudo-humans.

Guide language models along predefined logic of your own design, drastically improving performance and cost-efficiency.

Use other software and frameworks from OmniChain via HTTP requests and system commands. Use OmniChain from other software via its own HTTP API.

Access the underlying operating system to read/write files and run commands, or access external services via HTTP requests, turning your workflows into powerful integrations.

Build workflows that can loop and run 24/7, with the ability to pause for user input, and resume when ready.

Easily make custom nodes, use JS eval nodes to run custom code, and optionally have the AI generate the code for you.

Use the chain's self-modification capabilities to store and reuse data at any point in the chain, and export the chain with the data included.

Private, self-hosted, fully open-source, and available for commercial use via the non-restrictive MIT license.

Video Tutorial

Concept

Dealing with complexity

AI large language models (LLMs) are burdened with too much responsibility. They are expected to handle complex tasks, composed of many smaller tasks, while having no real self-awareness. Larger models can be better at this, but they are more expensive to run, and many of them are not viable for self-hosting.

The problem with agents

One nice solution to this issue has been to use "agents" - bots with their own roles, using the LLM as a backend and working as members of a team, in order to break the task into smaller tasks and have each agent worry about only its specific part. Sort of like a company.

This does lead to drastic performance improvements (look at Mixture of Agents - pure magic), but what about cost and efficiency? Not so much. When you're a business, you usually have a specific task and a specific process in mind for your software and workflows. You need focus and efficiency. And each token used by the LLM costs money, even on your own hardware.

An efficient solution - LLM processing components

OmniChain takes a simpler approach. Instead of treating the LLM as a pseudo-human, you use it as a processing component inside a visual programming environment. The point is to focus all the power of the LLM on very specific tasks, inside a predefined process, automating only what actually needs to be automated. That's what a business usually needs - to get a specific task done, and to automate the execution.

Integration

To avoid needlessly reinventing the wheel, OmniChain focuses on being a clean, portable visual controller for gluing together existing frameworks and software. In short, this means you can use other software from inside OmniChain, and you can use OmniChain from inside other software (via its HTTP API). So you can use OmniChain to both utilize and augment lots of existing awesome frameworks, like MetaGPT, CrewAI, LangChain, etc.
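For example, triggering a chain from another app could look roughly like the sketch below. Note that the endpoint path, port, and payload shape are illustrative assumptions, not the documented API - check the OmniChain API docs for the real routes and request format.

```js
// Hypothetical sketch: calling an OmniChain workflow over HTTP.
// The port, endpoint, and payload fields are assumptions for
// illustration - consult the OmniChain docs for the actual API.
async function runChain(message) {
    const response = await fetch("http://localhost:8000/api/run", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            chainId: "my-chain", // hypothetical chain identifier
            message, // input passed into the chain
        }),
    });
    if (!response.ok) {
        throw new Error(`OmniChain request failed: ${response.status}`);
    }
    return await response.json();
}

runChain("Summarize the latest report.")
    .then((result) => console.log(result))
    .catch((err) => console.error(err));
```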

Extensibility

There will always be something non-standard that a certain business or individual needs in their workflows, so OmniChain is built to be freely extensible - all you need to do to create a custom node is drop a JavaScript file into your custom_nodes folder. If you want a simpler solution for custom code, you can also use the JavaScript eval nodes to either let the model run code it generates, or to run any code you write yourself.
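As a sketch of the idea, a custom node file could export a definition along these lines. The property names and shape here are purely illustrative assumptions - the real registration API is defined by OmniChain itself.

```js
// custom_nodes/reverse_text.js
// Illustrative sketch only - the actual custom-node API is defined
// by OmniChain; the shape of this object is an assumption.
export default {
    name: "ReverseText", // node name shown in the editor
    inputs: [{ name: "text", type: "string" }],
    outputs: [{ name: "reversed", type: "string" }],
    // Called when the chain's execution reaches this node.
    async run(inputs) {
        const text = inputs.text ?? "";
        return { reversed: [...text].reverse().join("") };
    },
};
```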

Memory capabilities

Last but not least, OmniChain's execution engine is built so that specific nodes can rewrite their own content in real time and have the chain utilize the stored data. The data stays on the chain itself, so you can make chains that essentially reconfigure themselves at runtime, and then export the resulting chain, data included, for other people in the community to use.
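Conceptually, a self-updating node could look something like the sketch below, where the node writes back into its own stored value so the accumulated data gets serialized and exported with the chain. Again, the shape is a hypothetical illustration, not OmniChain's actual internal API.

```js
// Illustrative sketch of the self-modification idea - not
// OmniChain's actual internal API. The node appends each input
// to a value stored on the node itself, so exporting the chain
// exports the accumulated data too.
export default {
    name: "MemoryLog",
    inputs: [{ name: "entry", type: "string" }],
    outputs: [{ name: "log", type: "string" }],
    controls: { storedLog: "" }, // assumed to persist with the chain file
    async run(inputs, controls) {
        controls.storedLog += inputs.entry + "\n"; // rewrite own content
        return { log: controls.storedLog };
    },
};
```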