How Enterprises Can Utilize a Smarter, More Responsible AI

AI and ML trends 2024

Generative AI was 2023’s newsmaker. But as is often the case with new, little-understood technology that promises to change the world, there’s been a spate of concern about its impact. The large language models that power the likes of ChatGPT will keep growing and becoming more humanlike, while generative AI will become increasingly multimodal. As generative AI becomes more ubiquitous, technology and data professionals who want to tap into its potential will need to be smarter and more responsible with it.

In the second part of our series on technology and analytics trends for 2024, experts at Lingaro's data science and AI (DS&AI) practice weigh in on the more technical aspects that are shaping AI and generative AI’s impact now and tomorrow.

 

Pushing the boundaries of large language models

Among the many transformative trends on AI’s horizon, using large language models (LLMs) and generative AI to automate business processes and create dialogue interfaces with data stands particularly tall.

Lingaro Senior Data Scientist and AI Advisor Maciej Michalek cites Google’s Gemini LLM, reportedly up to five times more powerful than GPT-4, as a case in point. Google’s strides in developing dedicated AI processors (such as the TPU v5 architecture) also promise more economical yet more potent computing power that will further enable AI to permeate the enterprise. Global tech giants such as OpenAI, Alphabet, Meta, Microsoft, and NVIDIA are also doubling down on their AI investments. Concerns about computing power will linger, but quantum computing, which could help address them, also looms on the horizon.

Beyond the surface-level interactions of ChatGPT, Michalek said that LLMs will be crucial to unlocking efficiencies across sectors once they’re integrated with reliable data sources. He noted that skepticism about their data sources and reasoning mechanisms has held back their wider use, but as transparency increases and understanding deepens, LLMs will be ushered into mainstream applications.
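Integrating an LLM with reliable data sources usually takes the form of retrieval-augmented generation: relevant documents are retrieved and placed in the prompt so that answers are grounded in enterprise data rather than the model’s memory. A minimal sketch of the idea follows; the keyword-overlap scoring and the document contents are illustrative stand-ins, not any particular vendor’s implementation:

```python
# Minimal retrieval-augmented generation (RAG) sketch: ground an LLM prompt
# in retrieved enterprise documents. Keyword-overlap scoring stands in for a
# production vector search; the documents below are invented examples.

def retrieve(query: str, documents: dict, top_k: int = 2) -> list:
    """Return the top_k document names whose text best overlaps the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda name: len(query_terms & set(documents[name].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query: str, documents: dict) -> str:
    """Assemble a prompt instructing the model to answer only from sources."""
    names = retrieve(query, documents)
    context = "\n\n".join(f"[{n}]\n{documents[n]}" for n in names)
    return (
        "Answer using only the sources below; cite the source name.\n\n"
        f"{context}\n\nQuestion: {query}"
    )

docs = {
    "q3_sales.txt": "q3 sales in emea grew 12 percent driven by new retail partners",
    "hr_policy.txt": "remote work policy allows three days per week from home",
}
prompt = build_grounded_prompt("How did EMEA sales grow in Q3?", docs)
```

The grounded prompt would then be sent to the LLM of choice; the instruction to cite sources is what makes the model’s reasoning auditable against the underlying data.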

Regulations like those in the EU, though not necessarily a bad thing, might pose temporary setbacks to advanced AI initiatives. For IT, analytics, and data professionals keen on riding generative AI’s wave, a few imperatives emerge. Understanding AI’s potential and its associated costs is paramount. Leaders should know where to integrate AI into the company’s data strategy and existing solutions, which requires embracing a mindset of exploration and experimentation.

Organizations, for instance, need dedicated teams to explain, map, and track the breadth and depth of LLMs embedded as a capability in their AI-based systems. The Lingaro DS&AI practice currently works with consumer goods companies to define the scope, purpose, and functionality of their would-be AI engines and to create proofs of concept that align with specific use cases.

 

Tapping into multimodal generative AI

Along with LLMs, Lingaro DS&AI Practice AI Engineering Competency Leader Norbert Fijałek sees multimodal generative AI as a groundbreaker in 2024. Innovations that enable content to be generated across different modalities are already propelling this trend forward, Fijałek said.

GPT-4V, Large Language and Vision Assistant (LLaVA), Kosmos-2, and other large multimodal models (LMMs) will create new paradigms of multimodality in the generative AI space. GPT-4V, for instance, extends GPT-4 with vision capabilities, accepting both text and image prompts so that users can specify virtually any vision or language task. It can also be enhanced with test-time techniques developed for text-only models, including few-shot and chain-of-thought prompting. LLaVA is an open-source LMM targeting GPT-4V-like capabilities: it integrates a pretrained CLIP visual encoder with the Vicuna LLM to process text and image inputs for all-round visual and language comprehension. Kosmos-2, meanwhile, adds the ability to perceive object descriptions (e.g., bounding boxes) and ground text to the visual world.
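In practice, prompting a vision-capable model means mixing text and image parts in a single message. The sketch below builds such a payload in the OpenAI-style chat format; the model name, image URL, and question are placeholders, and an actual call would go through an API client with credentials:

```python
# Sketch of a text-plus-image prompt payload in the OpenAI-style chat format
# used by vision-capable models such as GPT-4V. The model name, image URL,
# and question are placeholders, not a live integration.

def vision_message(question: str, image_url: str) -> dict:
    """Build a single chat message mixing a text part and an image part."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

payload = {
    "model": "gpt-4-vision-preview",  # placeholder model name
    "messages": [
        vision_message(
            "Which products appear on this shelf?",
            "https://example.com/shelf.jpg",  # placeholder image
        )
    ],
}
```

The same message structure accommodates multiple images or interleaved text and images, which is what makes tasks like shelf audits or document-with-figures Q&A expressible as a single prompt.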

By utilizing existing LLMs, video generation models are becoming better at enriching video narratives and enhancing existing videos. Advancements in deep learning and signal processing are giving text-to-speech models a more humanlike voice and making speech-to-text models more accurate, in effect enabling voiceless data to listen and speak. In the same vein, sound generation models are replicating and orchestrating new sounds, be it in music or the nuances of human speech.

These innovations, stemming from AI’s ability to process data from multiple sources (text, images, videos, sounds), help in creating a holistic, humanlike understanding. The Lingaro DS&AI practice, too, has already been working with CPG and FMCG companies to adapt and apply multimodal generative AI in different parts of their business. Fijałek mentioned that their practice also helps enterprises augment their existing applications with more intelligent capabilities. This need is no surprise: By 2026, 30% of new apps are projected to use AI to drive personalized adaptive user interfaces, with data from transactions and external sources being infused into these apps to streamline how end users derive insights, predictions, and recommendations from them.

Success with any new technology comes with caveats, however. The challenge for leaders, Fijałek said, is how to balance innovation, practicality, and accountability, and then cascade that balance across the organization. While the likes of ChatGPT are helping democratize generative AI, enterprises that want to capitalize on this technology should do their part, too. With generative AI evolving and upending paradigms at warp speed, it’s crucial to stay updated on its latest developments. Collaboration and partnerships also enable enterprises to access specialized knowledge, expertise, and industry best practices that they can share with their own employees. In fact, more than 80% of enterprises are projected to use generative AI APIs, models, and apps in their production environments by 2026.

At the same time, however, enterprises need to invest heavily — creating and adopting new approaches, models, and techniques as well as upskilling their own workforce, to name a few — while complying with relevant regulations and ensuring security, privacy, and responsibility.

Indeed, generative AI is changing the way companies manage their knowledge and assets. To remain competitive, companies must now treat their data, including intellectual property such as brand assets, as a valuable resource that can be used by LLMs. By creating a unified metadata system, companies can ensure that their assets and documents are structured, organized, and easily accessible. Furthermore, companies need to revise and optimize their content for generative AI to use it effectively. This includes adapting the language, format, and structure of the content so that AI models can readily consume it, ultimately creating a more streamlined and efficient knowledge management system.
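A unified metadata system can start as simply as a consistent record per asset, so documents can be indexed and surfaced to an LLM in a uniform way. The sketch below illustrates one possible shape; the field names and values are invented examples, not a standard schema:

```python
# Illustrative sketch of a unified metadata record for brand assets, so that
# documents can be cataloged and retrieved by an LLM pipeline consistently.
# Field names and values are invented examples, not a standard schema.
from dataclasses import dataclass, field, asdict

@dataclass
class AssetMetadata:
    asset_id: str
    title: str
    asset_type: str          # e.g., "logo", "guideline", "campaign-brief"
    owner: str
    tags: list = field(default_factory=list)
    summary: str = ""        # short, AI-ready description used for retrieval

record = AssetMetadata(
    asset_id="BRAND-0042",
    title="2024 packaging guidelines",
    asset_type="guideline",
    owner="brand-team",
    tags=["packaging", "visual-identity"],
    summary="Color, logo placement, and typography rules for 2024 packaging.",
)

# A catalog keyed by asset ID gives downstream tools one place to look.
catalog = {record.asset_id: asdict(record)}
```

The `summary` field is the AI-facing piece: a short, plainly worded description is far easier for a retrieval step to match against user questions than the raw asset itself.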

 

Using AI to innovate responsibly

The EU, the UK, Canada, and the US are already crafting or introducing their own AI regulations. Even technology providers like Microsoft are working toward a smarter yet more responsible AI, with Azure Machine Learning’s Responsible AI toolkit and its accompanying standard seeking to address accountability in AI.

With these developments, enterprises’ AI solutions might undergo an overhaul, Lingaro Senior Data Scientist and AI Advisor Taylor van Valkenburg said. Data science teams must familiarize themselves with responsible AI standards and implement them.

It’s not just about doing the right thing. Gartner projects that by 2026, enterprises that apply trust, risk, and security management (TRiSM) controls to AI applications will increase the accuracy of their decision-making by eliminating 80% of faulty and illegitimate information.

For leaders and decision-makers, the path forward involves drawing on modern technologies to strengthen AI TRiSM. Owners and developers of AI models, for example, can embed explainability and transparency into their models and add content anomaly detection. Adopting ModelOps helps with data protection, while reinforcing adversarial resistance in AI-powered applications strengthens their security.

Lingaro’s DS&AI practice, for example, has been serving global CPG companies as a dedicated task force that evaluates and conveys the far-reaching effects of their AI systems beyond implementation, enabling their AI teams to align projects with Microsoft’s Responsible AI framework. Lingaro, a Microsoft Gold Partner, also draws on its business and technical expertise in Microsoft technologies to embed responsibility and accountability in AI and data science projects. This includes using toolkits for incorporating explainability and interpretability in AI models, assessing security risks, protecting sensitive data, simulating human-AI interactions, and performing confidential computing during deployment.

Indeed, generative AI’s potential is immense, but it needs to be directed with purpose and a clear ethical framework. AI’s potential lies not just in its computational prowess, but in its synergy with human values, wisdom, and insight.

Subscribe to our roundup of expert insights into upcoming trends and paradigm shifts in data science, AI, and machine learning that will help enterprises better utilize these innovations.

Subscribe to Lingaro AI Digest
