Latest AI Trends: Large Context Windows, Hyper-Personalization

Written by Maciej Michałek | Oct 3, 2024 12:50:33 PM

An Interview With Maciej Michałek, AI Advisor and Principal Consultant for the Data Science & AI Practice of Lingaro Group


Lingaro: The past trend has been to use LLMs to power search engines for internal knowledge libraries and help generate summaries and reports for managers. What does being able to parse hours-long transcripts mean for businesses now?

MM: In the corporate world, the new large multimodal models (LMMs) create completely new possibilities for information management. Every organization accumulates knowledge somewhere, so if a large company isn’t using LMMs yet, it should start catching up as soon as possible.

In everyday work, companies hold meetings. Knowledge accumulates during these meetings, but much of it is never properly captured because no one writes it down.

This means that only the attendees are privy to the most recent knowledge. Often, they pass down what they know orally, but tend to omit or forget details along the way. Let’s say a meeting was about a preventive maintenance app. If a new developer comes and asks how we should proceed with the project, they won’t get a hint from a document that was prepared five years ago and has since become outdated. However, they’ll get an answer from our meeting, since we talked about it and made decisions on what to do.

By having full transcripts as part of the knowledge database, a chatbot can relay the most recent and relevant information to even the newest of newcomers. Instead of knowledge being lost through oral tradition, knowledge is retained — including the context in which it was created.
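As a toy sketch of that idea, a transcript-backed chatbot might rank snippets from the knowledge base by keyword overlap with the question, breaking ties in favor of the most recent meeting. The sample transcripts and the scoring scheme below are hypothetical; a real system would use embeddings and an LLM rather than word matching:

```python
from datetime import date

# Hypothetical knowledge base: meeting-transcript snippets with dates.
transcripts = [
    {"date": date(2019, 5, 2),
     "text": "Design doc: the maintenance app will poll sensors hourly."},
    {"date": date(2024, 9, 30),
     "text": "Meeting: we decided the preventive maintenance app should switch to event-driven alerts."},
]

def retrieve(query: str, top_k: int = 1):
    """Rank snippets by keyword overlap with the query, newest first on ties."""
    words = set(query.lower().split())
    scored = sorted(
        transcripts,
        key=lambda t: (len(words & set(t["text"].lower().split())), t["date"]),
        reverse=True,
    )
    return scored[:top_k]

best = retrieve("how should we proceed with the preventive maintenance app")[0]
print(best["text"])  # surfaces the 2024 meeting decision, not the outdated 2019 doc
```

The point of the sketch is the recency tie-break: when an old design document and a fresh meeting transcript both match, the chatbot should answer from the latest decision.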

Lingaro: Natural language processors have become more powerful than ever before. In developing and implementing them, what have been the speedbumps along the way?

MM: Currently, companies are commonly aggregating their knowledge so that the greatest number of people in the organization can make use of their data. We can then use existing documents to validate new documents, create training materials, and prepare exams to test people on that knowledge.

However, the biggest blocker to doing all of that is there are a lot of inconsistencies in the documents. Emails could have one set of details, while meetings could have another set, so keeping things straight is additional work. Another blocker is confidentiality. Public knowledge must be separated from protected information so that the latter doesn’t leak out.

Information security is why generative AI technology is so expensive to implement. Essentially, you want a system that can listen to everything and serve knowledge well by keeping the facts straight, prioritizing what’s relevant, and keeping private things private.

Lingaro: So, we’re seeing a trend where companies are aggregating their knowledge, including new types of sources like meeting transcripts, and facing challenges like keeping facts straight and protecting sensitive information.  What are enterprises going to do with all of their knowledge?

MM: Right now, we’re able to significantly accelerate the absorption of knowledge. We improve the pace of employee training and adaptation to change. We’ve created a process for refreshing documents like employee handbooks, training guides, and user manuals: documents used by thousands of people.

However, employees reach a point where they don’t want to parse through a 30-page document again. Thanks to AI, we can be smart about updates. Specifically, the AI can take note of a user’s exposure to a version of a document, then create a tailor-made document conveying only the updates made since then, so that the user effectively has the latest version.
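A minimal sketch of that mechanism: given the version an employee last read and the current version, keep only the lines that were added or changed. The sample handbook lines are invented, and a production system would diff at the section level and have an LLM summarize the changes:

```python
import difflib

# Hypothetical handbook versions: what the employee last read vs. the current text.
seen_version = [
    "Expenses must be filed within 30 days.",
    "Remote work requires manager approval.",
    "Badges must be worn on site.",
]
current_version = [
    "Expenses must be filed within 14 days.",
    "Remote work requires manager approval.",
    "Badges must be worn on site.",
    "Laptops must use full-disk encryption.",
]

def updates_only(old: list, new: list) -> list:
    """Return just the lines added or changed since the reader's last version."""
    diff = difflib.ndiff(old, new)
    return [line[2:] for line in diff if line.startswith("+ ")]

for line in updates_only(seen_version, current_version):
    print(line)
```

The reader sees two lines (the new expense deadline and the new laptop rule) instead of rereading the whole 30-page document.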

We can go a step further and tap traffic statistics on individual sites. To illustrate, we can know that a person’s last contact with some regulations was two years ago. Since that time, there has been a series of changes. Now, AI can prepare a dedicated stream for that person so that they’ll become up to date. More precisely, AI can consider an individual’s circumstances and serve them a tailor-made platter of updates.

This is called hyper-personalization. AI can provide streams of information for specific professions, roles, points of contact, et cetera. All companies should be interested in this because…

Lingaro: …it makes information easier to consume?

MM: Yes, precisely. We can make information easy to digest, easy to control, and easy to test for absorption. That is, the AI could generate quick quizzes to see if the receiver correctly understood the updates provided to them. All of this could be done in a fully automated way.

Lingaro: So far, we’ve only been talking about internal processes. Can these new LMMs be used to communicate with customers?

MM: Companies could and definitely should use this new technology. Consumers use all sorts of channels to engage with companies. They’ll place orders over the phone and submit complaints over email, for example. Armed with an accurate record of all of these interactions, an AI can address a customer’s needs in an automated fashion. And since AI has become more difficult to distinguish from real humans, people are less prone to reject it and seek a human customer service representative. Purely from a quality-of-service standpoint, it has been projected that 80% of interactions wouldn’t require human involvement. This means that call center costs could drastically decrease.
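One common way to realize that 80/20 split is confidence-based triage: the AI handles a case only when its confidence in the resolution clears a threshold, and everything else escalates to a human. The threshold value and the sample cases below are illustrative assumptions, not figures from the interview:

```python
# Hypothetical triage: route a case to the AI unless its confidence in the
# resolution falls below a threshold, in which case a human takes over.
CONFIDENCE_THRESHOLD = 0.75  # assumed tuning value, not from the interview

def route(case: dict) -> str:
    return "ai" if case["confidence"] >= CONFIDENCE_THRESHOLD else "human"

cases = [
    {"id": 1, "topic": "order status", "confidence": 0.95},
    {"id": 2, "topic": "billing dispute", "confidence": 0.40},
    {"id": 3, "topic": "password reset", "confidence": 0.90},
]
handled_by_ai = [c["id"] for c in cases if route(c) == "ai"]
print(handled_by_ai)  # prints [1, 3]
```

Tuning the threshold is how an operator trades automation rate against quality of service.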

Lingaro: The savings would offset the initial costs of investing in AI?

MM: I believe that the savings it would generate would be enormous. It would do more than just offset costs, especially since AI opens up a number of opportunities for businesses. If you’re having to deal with a thousand complaints every day, it’d be less costly to have a single AI program summarize and draw out the most important information instead of a team of humans.

And I’ll go back to hyper-personalization again. It’ll take too much time and effort for a person to consider a customer’s entire history, let alone the histories of hundreds of customers. This is not a problem for AI.

Beyond hyper-personalization, you have consumer insights, obviously. Companies are drowning in customer communications, but AI can draw out trends, such as rising demand for a new flavor, a larger size, or wider distribution in particular locations. With AI, consumer insights can finally stop disappearing and start being converted into money, into a stream of product information.

Lingaro: Are the models limited to text files? What other types of information could the models use and generate?

MM: The mechanisms that retrieve textual information also work well with images. This is called multimodality, and other AI solutions can simultaneously process audio and video inputs as well. More and more GenAI models produce multimodal output, too, such as documents that have images in them, and images that have text embedded in them. Currently, we have models that can learn branding and produce marketing assets that accurately portray that particular branding. Maybe they’ll need to be tweaked by a graphic designer, but this feedback loop serves to improve output even more.

Lingaro: That will sound incredible to marketers once GenAI can reliably produce eye-catching collateral with accurate branding. What’s limiting or stopping the development of AI solutions like this?

MM: The pace of development is simply incredible. What is mainly blocking the capabilities of systems at the moment is computing power. But I’m fairly certain that we’ll be able to sufficiently address this soon. Heck, I won’t be surprised if AI ends up the one to solve its own problem.

However, that brings us to a conundrum: How much power should we let AI wield? Tech companies have been using AI to design and create processors and graphics chips that enhance AI’s own speed and intelligence. We also have AI tools that help programmers write code, so it’s not a farfetched idea that someone would prompt an AI program to create a version of itself that iteratively improves its own code.

In short, another limiting factor for AI development is our ethics. To illustrate, it’s possible for people to use GenAI to disseminate large-scale disinformation. It could also be used to create a multitude of computer viruses, and later to produce harder-to-thwart versions of them. However, it could also be used to develop life-saving medicines faster and make supply chains more efficient and sustainable. Our ethics will define the future trends of AI.