CEO's Guide to Generative AI


23 Jun 2023

13 min read

The emergence of powerful generative AI tools like ChatGPT, Bard, Midjourney, and others has ignited a wave of enthusiasm among business leaders. As CEOs contemplate the potential impact on their organizations, it is crucial to navigate this exciting frontier with both deliberate progress and well-informed decision-making.

The extraordinary success of the public version of ChatGPT serves as a testament to the transformative power of generative AI. Within two months, it amassed an astonishing 100 million users, revolutionizing the AI landscape with unprecedented accessibility. Unlike previous AI technologies, generative AI does not require users to possess extensive knowledge of machine learning. Its out-of-the-box accessibility enables anyone, regardless of expertise, to engage with and derive value from this groundbreaking technology. Much as the personal computer and the iPhone did, a single generative AI platform can spawn many applications catering to diverse audiences across age groups, educational backgrounds, and geographical locations with internet access.

Foundation models: Powering the possibilities

At the heart of generative AI chatbots lies the utilization of foundation models. These expansive neural networks are meticulously trained on vast amounts of unstructured and unlabeled data, encompassing diverse formats such as text and audio. The versatility of foundation models enables them to be employed across various tasks. In contrast, previous AI models were often limited in scope, confined to performing singular functions like predicting customer churn.

Imagine a single foundation model generating an executive summary for a 20,000-word technical report on quantum computing, devising a go-to-market strategy for an arboriculture business, and providing unique recipes utilizing ten ingredients from someone's refrigerator. However, it is essential to note that while generative AI boasts impressive capabilities, there may be occasional trade-offs in accuracy, prompting renewed attention toward AI risk management.

Enhancing and expediting business applications

With appropriate safeguards in place, generative AI has the potential to unlock novel applications and improve existing ones within businesses. Let's examine a theoretical customer sales call scenario. Traditionally, upselling opportunities were based on static customer data obtained before the call, such as demographics and purchase history. However, the introduction of generative AI brings dynamic possibilities. An AI model specially trained for this purpose can analyze the actual content of the conversation in real time, drawing insights from internal customer data, external market trends, and social media influencer data. Consequently, the generative AI tool can provide salespersons with upselling suggestions during the conversation, even offering a preliminary draft of a tailored sales pitch for further personalization.
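For the technically inclined, a minimal sketch of how such a tool might be wired together is shown below. It assumes an OpenAI-compatible chat-completions API; the model name, prompt, and the transcript and CRM inputs are illustrative placeholders rather than a description of any specific vendor's product.

```python
# Minimal sketch: feed the live call transcript plus CRM context to a
# chat-completion model and ask for upsell suggestions.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def suggest_upsell(transcript_so_far: str, crm_snapshot: str) -> str:
    """Return draft upsell talking points for the sales representative."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat model could be used
        messages=[
            {"role": "system",
             "content": "You assist a salesperson during a live call. "
                        "Suggest at most three relevant upsell offers and "
                        "a one-paragraph draft pitch."},
            {"role": "user",
             "content": f"Customer data:\n{crm_snapshot}\n\n"
                        f"Conversation so far:\n{transcript_so_far}"},
        ],
        temperature=0.3,
    )
    return response.choices[0].message.content

# Example call with toy data
print(suggest_upsell("Customer asks about upgrading their storage plan...",
                     "Mid-size retailer, two products owned, renewal in 60 days"))
```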

Empowering knowledge workers

The benefits of generative AI extend beyond specific job roles, encompassing nearly every knowledge worker. While automation may be on the horizon, the true value of generative AI lies in its integration into everyday tools used by knowledge workers, such as email or word-processing software. Empowered by these enhanced tools, professionals can raise their productivity markedly and accomplish more with less effort.

Initiating the generative AI journey

CEOs face critical decisions regarding the adoption of generative AI. Some may seize the opportunity to gain a competitive edge by embracing generative AI as a strategic companion, reimagining how work is accomplished. Others may choose a cautious approach, experimenting with a few use cases before making substantial investments. Companies must evaluate their technical expertise, technology and data architecture, operating models, and risk management processes to ensure they are well-equipped for the transformative potential of generative AI.

What makes generative AI different from other kinds of AI?

Generative AI, as its name implies, distinguishes itself from previous AI or analytics methods by its ability to efficiently generate new content. This content often takes the form of "unstructured" data, such as written text or images, which don't conform to conventional tabular representations.

At the heart of generative AI lies the foundation model, and these models rely heavily on transformers, a crucial component of their architecture. GPT, in fact, stands for "generative pre-trained transformer." Transformers are a type of artificial neural network trained using deep learning techniques. The term "deep learning" refers to stacking many layers within neural networks, an approach that has driven significant advances in AI.
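To make "layers" concrete, the short PyTorch sketch below stacks a handful of transformer encoder layers; the dimensions are arbitrary toy values and bear no relation to production-scale foundation models.

```python
# Toy illustration of "deep" transformer layers (not a real foundation model).
# Requires: pip install torch
import torch
import torch.nn as nn

d_model, n_heads, n_layers = 64, 4, 6  # arbitrary toy sizes

# One transformer layer: self-attention plus a feed-forward network
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)

# "Deep learning" means many such layers stacked on top of each other
encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

tokens = torch.randn(1, 10, d_model)  # a batch of 10 token embeddings
contextualized = encoder(tokens)      # each token now attends to the others
print(contextualized.shape)           # torch.Size([1, 10, 64])
```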

Nonetheless, foundation models possess certain distinguishing characteristics that set them apart from previous generations of deep learning models. First and foremost, they can be trained on vast and diverse collections of unstructured data. For instance, a specific type of foundation model known as a "large language model" can be trained on massive amounts of publicly available text from the internet, covering an extensive range of topics. While other deep learning models can handle substantial volumes of unstructured data, they are often trained on more specific datasets. For example, a model might be trained on a particular set of images to recognize specific objects within photographs.

It's worth noting that many other deep learning models are typically limited to performing a single task. For instance, they may excel at object classification in photos or specialize in making predictions. In contrast, a foundation model can carry out these tasks and generate content. The versatility of foundation models stems from their ability to learn patterns and relationships from the diverse training data they consume. This, in turn, empowers them to predict the next word in a sentence, enabling systems like ChatGPT to provide answers on various topics and facilitating the generation of images based on textual descriptions, as seen in DALL·E 2 and Stable Diffusion.
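This next-word mechanism can be seen in miniature with a small, openly available model. The sketch below uses GPT-2 via the Hugging Face transformers library purely to illustrate the prediction-and-generation loop.

```python
# Minimal sketch of next-word prediction with a small pretrained model.
# Requires: pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Generative AI helps businesses"
inputs = tokenizer(prompt, return_tensors="pt")

# The model outputs a probability distribution over the next token...
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
next_token = tokenizer.decode(logits.argmax().item())
print(f"Most likely next token: {next_token!r}")

# ...and repeating that step many times generates whole passages.
generated = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```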

Thanks to the flexibility of foundation models, companies can employ a single model to fulfill multiple business use cases, a feat not often achievable with earlier deep learning models. For example, a foundation model that incorporates information about a company's products could be leveraged to address customer inquiries and support engineers in developing enhanced versions of those products. Consequently, companies can deploy applications faster and reap their benefits more expeditiously.

However, it's important to acknowledge that current foundation models are not universally suitable for all applications due to their underlying mechanisms. Large language models, for instance, may be susceptible to "hallucination," whereby they may provide plausible but ultimately incorrect answers. Furthermore, these models do not always provide explicit reasoning or sources for their responses. Consequently, caution must be exercised when integrating generative AI without human oversight in applications where errors could result in harm or when explainability is required. Additionally, generative AI is ill-suited for directly analyzing extensive amounts of tabular data or solving complex numerical optimization problems. Researchers are actively striving to address these limitations.

Integrating generative AI into business

CEOs should view the exploration of generative AI as an imperative rather than an option. This technology can create value across a wide spectrum of use cases. The economic and technical barriers to entry are not prohibitive. Still, inaction could mean falling behind competitors rapidly. Each CEO should collaborate with their executive team to evaluate where and how to incorporate generative AI. Some CEOs may recognize generative AI as a transformative opportunity for their companies, offering the chance to revolutionize everything from research and development to marketing, sales, and customer operations. Others may prefer to start with small-scale implementations and gradually expand. Once the decision is made, AI experts can follow specific technical pathways to execute the strategy based on the identified use cases.

Most of the benefits derived from generative AI within an organization will stem from employees leveraging the embedded features in their existing software. For example, email systems could provide options for generating initial drafts of messages, productivity applications could automatically create the first draft of a presentation based on a given description, financial software could generate written summaries highlighting notable features in a financial report, and customer-relationship-management systems could suggest effective ways to interact with customers. These features can enhance knowledge workers' productivity across various domains.

However, generative AI can have a more transformative impact in certain use cases. Let's explore some real-world examples of how companies from different industries leverage generative AI to reshape their work processes. These examples shared by McKinsey range from low-resource implementations to more resource-intensive endeavors:

 

Software engineering

The first example is a case of relatively low complexity that offers immediate productivity benefits without extensive customization. The company in question faces a backlog of feature requests and bug fixes due to a shortage of skilled software engineers. To enhance the productivity of their engineers, they are adopting an AI-powered code completion tool that seamlessly integrates with their coding software.

With this solution, engineers can provide code descriptions in natural language, and the AI system suggests multiple variations of code blocks that fulfill the given description. Engineers can then choose one of the AI's proposals, make necessary refinements, and easily insert the code into their projects.
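A minimal sketch of that "describe it in plain language, receive several code variants" workflow might look like the following; it assumes an OpenAI-compatible chat-completions API, and the model name and prompt are illustrative rather than a reference to the specific tool the company adopted.

```python
# Minimal sketch: one natural-language description, several candidate code blocks.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

description = "Parse an ISO-8601 date string and return the weekday name in English."

response = client.chat.completions.create(
    model="gpt-4o-mini",   # illustrative model choice
    n=3,                   # ask for three alternative implementations
    messages=[
        {"role": "system", "content": "Return only a single Python function."},
        {"role": "user", "content": description},
    ],
)

# The engineer reviews the variants, refines one, and inserts it into the project.
for i, choice in enumerate(response.choices, start=1):
    print(f"--- Variant {i} ---\n{choice.message.content}\n")
```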

Research conducted by McKinsey has demonstrated that such tools can accelerate code generation by up to 50 percent for developers. Furthermore, they can assist in debugging processes, ultimately enhancing the quality of the final product. However, it is important to note that generative AI cannot replace the expertise of skilled software engineers. More experienced engineers tend to derive the greatest productivity benefits from these tools. In contrast, less experienced developers may observe less impressive and occasionally negative outcomes. It is crucial to involve software engineers in the process to ensure the code's quality and security, as there is a known risk that AI-generated code may contain vulnerabilities or other bugs. Mitigating such risks should be a priority (refer to the final section of this article for recommended risk-mitigation strategies).

Implementing this off-the-shelf generative AI coding tool is relatively affordable, and the time required to put it into use is short since the product is readily available and does not demand extensive in-house development. The specific cost may vary depending on the software provider. Still, fixed-fee subscriptions typically range from $10 to $30 per user per month. When selecting a tool, discussing licensing and intellectual property matters with the provider is essential to ensure compliance and avoid potential violations.

To support the adoption of the new tool, a small cross-functional team is dedicated to tasks such as selecting the software provider and monitoring performance, including vigilance for intellectual property and security concerns. Implementing the tool necessitates only workflow and policy adjustments. Since the tool operates as a readily available software-as-a-service (SaaS) solution, additional costs related to computing and storage are minimal or non-existent.

Customer support

The next level of sophistication involves the fine-tuning of a foundation model. In this example, a company utilizes a foundation model specifically optimized for conversational purposes. They fine-tune this model using their own high-quality customer chats and domain-specific questions and answers as they operate within a sector that employs specialized terminology (e.g., law, medicine, real estate, finance). Providing fast customer service is a crucial factor in gaining a competitive edge.

The company's customer support representatives handled a substantial volume of inbound inquiries daily, but response times occasionally exceeded desirable levels, leading to customer dissatisfaction. To address this, the company introduced a generative AI customer-service bot that could handle most customer requests. The objective was to provide prompt responses consistent with the company's brand and tailored to customer preferences. As part of the fine-tuning and testing process for the foundation model, it was essential to ensure that the bot's responses aligned with domain-specific language, brand identity, and the desired tone set by the company. Continuous monitoring was required to evaluate the system's performance, including customer satisfaction.
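For readers curious about what fine-tuning looks like in code, here is a heavily simplified sketch; the base model (distilgpt2), the training file name, and the hyperparameters are assumptions for illustration and do not reflect the company's actual setup.

```python
# Hedged sketch: fine-tuning a small conversational model on a company's own
# chat transcripts. A real project would add evaluation, safety filtering,
# and domain-specific formatting on top of this.
# Requires: pip install transformers datasets torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "distilgpt2"                  # small base model, illustration only
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-style models ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical file: one anonymized support conversation per line.
dataset = load_dataset("text", data_files={"train": "support_chats.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="support-bot", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("support-bot")
```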

The company created a product roadmap with several phases to minimize potential model errors. In the initial phase, the chatbot was internally piloted, allowing employees to provide feedback by indicating whether the model's suggestions were satisfactory. This input helped the model learn and improve. Subsequently, the model "listened" to customer support conversations and offered suggestions. Once the technology had undergone sufficient testing, the second phase commenced, shifting the model's focus to customer-facing scenarios with human supervision. Eventually, as leaders gained complete confidence in the technology, the system could be largely automated.

In this case, generative AI lets service representatives concentrate on handling higher-value, more complex customer inquiries, increasing efficiency and job satisfaction among the representatives. The implementation of the bot elevated service standards and improved overall customer satisfaction. Unlike conventional customer chatbots, this AI-powered solution could access comprehensive internal customer data and "remember" past conversations, including phone calls, a significant advancement over existing customer service chatbots.

Realizing the benefits outlined above required substantial investments in software, cloud infrastructure, technical talent, and heightened levels of internal coordination regarding risk and operations. Generally, fine-tuning foundation models costs two to three times more than building software layers on top of an API. The increased costs encompass talent acquisition and third-party expenses related to cloud computing (if fine-tuning a self-hosted model) or API usage (if fine-tuning via a third-party API). To successfully implement the solution, the company relied on the expertise of DataOps and MLOps professionals and contributions from other departments such as product management, design, legal, and customer service specialists.

Drug discovery

The most complex and customized generative AI use cases arise when suitable foundation models are unavailable, requiring the company to build one from scratch. This scenario often occurs in specialized sectors or when working with unique data sets significantly different from the data used to train existing foundation models. The pharmaceutical industry provides an example of training a foundation model from scratch, presenting significant technical, engineering, and resource challenges. However, the potential return on investment from using a higher-performing model should outweigh the financial and human capital costs.

In this particular case, research scientists in drug discovery at a pharmaceutical company needed to determine the next experiments to conduct based on microscopy images. They possessed a vast data set of millions of such images, containing valuable visual information on cell features relevant to drug discovery but difficult for humans to interpret. These images were utilized to assess potential therapeutic candidates.

To expedite its research and development efforts, the company opted to develop a tool that would aid scientists in understanding the relationship between drug chemistry and recorded microscopy outcomes. Since multimodal models in this field were still nascent, the decision was made to train a custom model. The team leveraged both real-world images commonly used to train image-based foundation models and their extensive internal collection of microscopy images to build the model.

The trained model proved valuable by predicting which drug candidates were more likely to yield favorable outcomes and enhancing the accuracy in identifying relevant cell features crucial for drug discovery. This led to more efficient and effective drug discovery processes, reducing the time to achieve results and minimizing inaccurate, misleading, or failed analyses.

Generally, training a model from scratch costs ten to twenty times more than building software around a model API. The increased costs stem from larger teams, including expertise from machine learning specialists with PhD-level qualifications, as well as higher expenses in computing power and storage. The projected cost of training a foundation model varies widely depending on the desired performance level and modeling complexity. These factors influence the required data set size, team composition, and compute resources. In this specific use case, the engineering team and ongoing cloud expenses constituted most of the costs.

The company recognized the need for significant updates to its tech infrastructure and processes. This involved obtaining access to multiple GPU instances for model training, utilizing tools for distributed training across numerous systems, and implementing best practices in MLOps to optimize cost and project duration. Additionally, extensive data processing work was necessary, including data collection, integration (ensuring consistency in format and resolution across different data sets), and cleaning (filtering low-quality data, removing duplicates, and ensuring data distribution aligns with the intended use). As the foundation model was trained from scratch, rigorous testing of the final model was crucial to ensure accurate and safe output.
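As one concrete example of that data-preparation work, the sketch below standardizes image resolution and removes exact duplicates; the directory names, target size, and file format are hypothetical.

```python
# Illustrative sketch of one cleaning step mentioned above: bringing images to a
# consistent resolution and dropping exact duplicates before training.
# Requires: pip install pillow
import hashlib
from pathlib import Path
from PIL import Image

RAW_DIR, CLEAN_DIR = Path("raw_microscopy"), Path("clean_microscopy")
TARGET_SIZE = (512, 512)            # arbitrary common resolution
CLEAN_DIR.mkdir(exist_ok=True)

seen_hashes = set()
kept, dropped = 0, 0

for path in sorted(RAW_DIR.glob("*.tif")):
    with Image.open(path) as img:
        img = img.convert("L").resize(TARGET_SIZE)  # grayscale, uniform size
        digest = hashlib.sha256(img.tobytes()).hexdigest()
        if digest in seen_hashes:                   # exact duplicate: skip it
            dropped += 1
            continue
        seen_hashes.add(digest)
        img.save(CLEAN_DIR / f"{path.stem}.png")
        kept += 1

print(f"kept {kept} images, dropped {dropped} duplicates")
```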

The takeaway for CEOs

The examples presented here provide valuable lessons for CEOs venturing into generative AI.

These use cases demonstrate practical benefits and value-creation opportunities across various industries. CEOs should recognize that generative AI can have a transformative impact on jobs and the workplace. Companies can start small or aim for larger-scale implementations depending on their goals and aspirations.

The costs of pursuing generative AI can vary significantly depending on the specific use case and the resources required, such as software, cloud infrastructure, technical expertise, and risk mitigation. It is crucial for companies to carefully consider the financial implications and allocate appropriate resources based on the complexity and potential returns of each use case.

Regardless of the use case, companies must address risk issues associated with generative AI implementation. Certain use cases may necessitate more extensive risk mitigation measures compared to others. It is essential for CEOs to proactively identify and address potential risks to ensure the successful integration of generative AI within their organizations.

While there may be a sense of urgency to get started quickly, it is beneficial for companies to develop a basic business case before diving into generative AI initiatives. This approach allows CEOs to assess the feasibility, costs, and potential benefits of implementing generative AI within their specific organizational context. A well-defined business case will help guide decision-making and enable a smoother journey into generative AI.

By considering these takeaways, CEOs can make informed decisions and effectively navigate their organizations' generative AI journeys, unlocking the transformative potential of this technology while managing associated costs and risks.
