Thoughts from a recently concluded AI conference at UC Berkeley

While the conference was a comprehensive mirror of the schism in AI discourse today, I walked back with more questions than answers.

About three weeks back, I attended UC Berkeley's "Interface to Agency," a single-day design conference on Agentic AI and ontological design.

All in all, it was an extremely balanced conference that did a decent job of presenting a myriad of viewpoints on the state of artificial intelligence.

But I came out of the conference with three key insights.

There's a lot of gate-keeping by obfuscation right now

What's the biggest indicator of an insecure industry? Jargon. Where the notion of "This is too complicated for you to understand, let the experts handle it" is prevalent, you know it's an insecure industry trying to gate-keep others from joining, because it fears the knowledge it claims to possess would no longer be considered special, or the money it's minting right now would start to get diluted. And that's exactly how it felt sitting through several of the talks on Agentic AI that day.

"The job of a human in software design is going to be that of an orchestrator or manager of Agents, that they employ / deploy to get things done for them"

A common theme I observed during the first half of the conference was this notion that the computer will do work for me. And while it's a noble idea, it's hardly new, nor as revolutionary as AI optimists would have you believe. It's just wrapped up in the jargon of Agentic AI.

Truth be told, Agentic AI isn't new. It's just a fancy way of saying "we figured vanilla ChatGPT is a stupid tool for answering your questions, so we decided it needs to hand tasks off to different actions that can reliably give correct answers." If hallucinations are just glorified bugs, agents are glorified computer programs. For some reason, AI people love giving human attributes to non-living things.

What's complicated about this? Nothing. It's just that the terminology itself is alienating.
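
To make that concrete, here's a minimal sketch, in Python, of what an "agent" boils down to once the jargon is stripped away: a loop in which a model picks a tool and an ordinary program executes it. Everything here is made up for illustration; llm() is a hypothetical stand-in for a real model call, and the tools are plain functions.

```python
# A minimal sketch of an "agent": an ordinary program that loops,
# asks a language model which tool to use, runs that tool, and feeds
# the result back. llm() below is a hypothetical stand-in for a real
# model API call; the tools are plain Python functions.

def search_web(query: str) -> str:
    return f"(pretend search results for '{query}')"

def calculator(expression: str) -> str:
    return str(eval(expression))  # toy example only; never eval untrusted input

TOOLS = {"search_web": search_web, "calculator": calculator}

def llm(conversation: list) -> dict:
    """Hypothetical stand-in for a model call. A real implementation
    would send the conversation to an LLM API and parse its reply;
    here we hard-code one tool call, then finish."""
    if len(conversation) == 1:
        return {"action": "calculator", "input": "6 * 7"}
    return {"action": "finish", "input": conversation[-1]}

def run_agent(task: str) -> str:
    conversation = [task]
    while True:
        step = llm(conversation)
        if step["action"] == "finish":
            return step["input"]
        tool = TOOLS[step["action"]]              # the model picked a tool...
        conversation.append(tool(step["input"]))  # ...the program runs it

print(run_agent("What is 6 * 7?"))  # -> 42
```

That's the whole trick: the model chooses, the program dispatches. Useful, sure, but it's software, not a new species.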

Coming to the other half, the computer doing work for you: it's trying to live the dream of countless automation and voice-assistant apps, from Automator, to Shortcuts, to Siri. This idea that I press a button and the computer just does things for me, and I don't need to do anything.

Guess what? It's great, but only for small, repeatable actions that you don't want control over, and that isn't a lot of things.

Even socially, we don't want others doing things for us as much as we want to do things together.

If my brother orders a burrito for me by guessing that I'm hungry and would want a burrito, it may flatter me, but there's a high likelihood that I don't want a burrito at that time, or from that place, or that I want a different burrito from the one he thinks I want. He can't get into my brain and figure these things out; he needs to ask me what I want. "Interaction" is more important than "action".

Automator, Shortcuts, and IFTTT aren't the next computing revolution, not because the technology isn't there yet, but because the tasks a computer can automatically do for us are very few and appeal only to specific people.

AI has produced the biggest schism in technological discourse.

Bruce Sterling

It's not like the conference didn't put those viewpoints up; it did, and to a very good extent. If the first half of the conference made me feel out-intellected, the second half had talks that focused on interaction design, home-grown AI models, and a rollicking roast of the technology by Bruce Sterling.

While the second half was more enjoyable to me, I couldn't help but notice the schism. On one hand, we have people who practically live and breathe this "agentic" world and would have you believe it's the future of computing; on the other, there are people who want to push back, who don't want AI impacting human creativity, who even look down upon it.

That dichotomy is jarring, even more so when you see such discordant views amongst people working on the tech itself. To me, it signals a clear lack of technological and design leadership.

We have the tech; it's controlled by big tech; big tech is controlled by systems of scale and growth, which the technology, as initially promised, is failing to fulfill. That pushes execs to make pompous, grandiose claims far from reality, and that in turn pushes this hyper-optimistic, buzzword-laden narrative of AI being all-powerful.

People who are skeptical of big tech sometimes see through the bullshit. Sometimes people buy into the fear, uncertainty, and doubt that execs are creating, because the new tech is intimidating. A lot of people have genuine ethical concerns about it, and some don't buy into the marketing and see the utility of the technology differently.

But such is the juggernaut of AI growth that these people are pushed to the other end of the spectrum, leaving little room for rational discourse; only screeching voices that vacillate between "AI is the future of humanity" and "AI is the worst, most unethical thing in tech" are left in the room.

Nobody gets AI yet, but everyone wants to have an opinion on it. A vision rooted in an understanding of the technology and its ethical implications would've done wonders for the AI discourse. Sadly, that sort of leadership doesn't exist anymore.

There's no transparency in AI.

Every breakthrough technology gave people the freedom to tinker with it, to look under the hood, to shape it and mold it into their own, and to learn about its workings by making.

Computers started with hackers tinkering in computer clubs; they were accessible electronics that people could get into and understand as well as build on top of. In the early days of the web, you could go to a website, view source, and understand how things worked; the code was open to be understood.

This is true for computers, but also for ancient tech like wood, fire, paper, and metal.

But with AI, about all you can do is build on top of the tech developed by conglomerates like OpenAI and Anthropic. There's no easy way for people to dig into an LLM, understand how it works, and tinker with its internals.

While tools to fine-tune LLMs exist, they are far less accessible to the common person than the tools to tinker with a computer were in the 1980s, partly because of the complexities involved in fine-tuning an LLM, and partly because the weights and the reasoning are the secret sauce for these companies. And even if you fine-tune an LLM, you cannot really get into the training weights or tinker with how its reasoning works, so hobbyists working with AI always have a second-hand understanding of the tech.

It's also a technology uniquely controlled by big tech. Every other technological revolution was more open.  


That's a lot of words to say that while I appreciate the varied AI discourse the conference presented, I walked back with more questions than answers.

"What are we really building, and how is it helping people?" is the question I walked in with, and walked out with, unanswered.