
Generative AI could transform the way we interact with enterprise software


It’s potentially the biggest shift since point-and-click

Over the last several months, OpenAI, and ChatGPT in particular, has shown what’s possible with a user interface built on top of a large language model that can answer questions and generate code or images. That alone is remarkable, but we can also interact with and refine the output by having a conversation of sorts with the AI. Now imagine how transformative that could be when applied to the enterprise applications you use every day.

What if you could build an interface on top of your existing applications so that, instead of pointing and clicking, you could simply ask the computer to perform a task and it would do it, drawing on the application’s underlying data model or your company’s internal language model?
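To make that idea concrete, here is a minimal sketch of what such a layer could look like: a prompt asks the model to translate a request into a structured action, which is then dispatched to the existing application. This is an illustration only; the complete() helper and the action names are placeholders for whatever model and enterprise APIs a company actually uses, not any particular vendor’s interface.

    import json

    # Placeholder for a call to whichever large language model you use;
    # wire in your provider's client here.
    def complete(prompt: str) -> str:
        raise NotImplementedError("connect your LLM provider")

    # The only actions the interface may trigger; a real system would map
    # these to existing application APIs rather than stubs.
    ACTIONS = {
        "onboard_employee": lambda args: f"Onboarding {args['name']}...",
        "generate_pnl": lambda args: f"Generating P&L for {args['month']}...",
    }

    PROMPT = (
        "Translate the user's request into JSON with keys 'action' and 'args'. "
        f"Valid actions: {sorted(ACTIONS)}. Request: "
    )

    def handle(request: str) -> str:
        # e.g. handle("Generate a monthly P&L statement for May") might
        # come back as {"action": "generate_pnl", "args": {"month": "May"}}
        call = json.loads(complete(PROMPT + request))
        return ACTIONS[call["action"]](call["args"])

The key design point is that the model never acts freely: it only translates language into a small, vetted set of actions the application already supports.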

That would be a huge leap forward in computing. Arguably the biggest prior leap came in 1984, when the Apple Macintosh brought the graphical user interface to the mass market, beginning a slow shift away from the command line that went fully mainstream in the early ’90s with Windows 3.1 and later Windows 95.

There have been other attempts at new interaction models, such as voice assistants like Siri and Alexa, and while they changed some things on the consumer side, they are not the same as a computer producing work for us: mostly they retrieve answers and, in some cases, execute simple commands.

They certainly didn’t change the way we work, and that is the true measure of whether a new computing approach is transformational. If you could simply type an instruction like “Help me onboard a new employee” or “Generate a monthly P&L statement” instead of explicitly guiding the system through each step, that would be a fundamental leap forward in UX design.

That’s what generative AI has the potential to do, but like anything else, it’s going to take some creativity to design these new interfaces elegantly, so they don’t feel bolted onto the old point-and-click experience. It will also probably require more focused, domain-specific large language models.


