Is every layer in the ACE framework an LLM? #149
Replies: 2 comments 1 reply
-
Ping! I started reading the paper and I'm hoping it will make things clearer, but I'd also appreciate it if someone could offer additional insights.
-
As one of the primary authors of the original hello-layers demo, let me add a few pieces here:
I'm pointing these things out because I believe what ACE really needs is people thinking creatively about how to implement the layers of intelligence and the inter-layer communication, without getting too hung up on the work that's already been done. A layer might best have one model, or many. It might have elements of agentic behavior itself, or not. Messages on the bus may be best restricted to adjacent layers, or not. As it stands, these are unanswered questions, and I would love to see people trying novel approaches and reporting back on their successes and failures!
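To make those open questions concrete, here is a minimal sketch of one possible layer/bus design, not the ACE or hello-layers implementation. All names (`Bus`, `Layer`, `Message`, `adjacent_only`) are hypothetical, and the `think` callable stands in for whatever intelligence a layer uses: one LLM call, many, or plain code.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Message:
    source: int   # index of the layer that emitted this message
    payload: str

class Bus:
    """Toy message bus. With adjacent_only=True, a message from layer i
    is delivered only to layers i-1 and i+1 (one of the open questions:
    should bus traffic be restricted to adjacent layers?)."""
    def __init__(self, num_layers: int, adjacent_only: bool = True):
        self.adjacent_only = adjacent_only
        self.queues: List[List[Message]] = [[] for _ in range(num_layers)]

    def publish(self, msg: Message) -> None:
        for i, queue in enumerate(self.queues):
            if i == msg.source:
                continue
            if self.adjacent_only and abs(i - msg.source) != 1:
                continue
            queue.append(msg)

class Layer:
    """A layer wraps a 'think' function. Nothing here requires it to be
    a single LLM session -- it could fan out to several models, or none."""
    def __init__(self, index: int, bus: Bus, think: Callable[[str], str]):
        self.index, self.bus, self.think = index, bus, think

    def step(self) -> None:
        inbox, self.bus.queues[self.index] = self.bus.queues[self.index], []
        for msg in inbox:
            self.bus.publish(Message(self.index, self.think(msg.payload)))

# Three layers; each just tags the text it processes.
bus = Bus(num_layers=3, adjacent_only=True)
layers = [Layer(i, bus, lambda text, i=i: f"layer{i}:{text}") for i in range(3)]

bus.publish(Message(source=0, payload="mission"))  # reaches only layer 1
layers[1].step()  # layer 1 responds; its reply reaches layers 0 and 2
```

Flipping `adjacent_only` to `False` turns the same code into a broadcast design, which is exactly the kind of variation worth experimenting with and reporting on.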
-
Hi, I'm delving into the ACE framework and have watched the introduction video, but I still have a couple of points that are unclear.
Firstly, in the ACE framework, does each layer function as a separate session with a Large Language Model (LLM), or does a single LLM span the entire framework? In other words, I'm wondering whether there's more to each layer than just 'LLM Input' and 'Bus Output'. Could you provide some examples?
Secondly, regarding the memory capabilities of the agent model in ACE, how is consistent mission adherence ensured? For instance, how does the framework prevent an agent from deviating from its assigned mission over time?
I'm planning to read the paper for a deeper understanding, but any insights or explanations would be greatly appreciated.