Building AI agents with Semantic Kernel

By Simon Bisson

Back in the early 1990s, I worked in a large telecoms research lab, as part of the Advanced Local Loop group. Our problem domain was the “last mile”—getting services to people’s homes. One of my research areas involved thinking about what might happen once the network’s shift from analog to digital services was complete.

I spent a great deal of time in the lab’s library, contemplating what computing would look like in a future of universal bandwidth. One of the concepts that fascinated me was ubiquitous computing, where computers disappear into the background and software agents become our proxies, interacting with network services on our behalf. That idea inspired work at Apple, IBM, General Magic, and many other companies.

One of the pioneers of the software agent concept was MIT professor Pattie Maes. Her work crossed the boundaries between networking, programming, and artificial intelligence, and focused on two related ideas: intelligent agents and autonomous agents. These were adaptive programs that could find and extract information for users and change their behavior while doing so.

It has taken the software industry more than 30 years to catch up with that pioneering research, but with a mix of transformer-based large language models (LLMs) and adaptive orchestration pipelines, we’re finally able to start delivering on those ambitious original ideas.
