The Risks of Misleading AI Models: A Cautionary Tale for Entrepreneurs

AI is one of the biggest buzzwords of this decade, and it has quickly been absorbed into both our personal and professional lives, from image and video editing to summarizing long articles to data analysis. Worldwide, AI startups clocked funding of $498 billion in H1 2023, with an expected CAGR of 37.7%. While big tech companies such as OpenAI and Google train and optimize their latest Large Language Models (LLMs), surpassing 1 trillion parameters (1.76 trillion in GPT-4) and ultimately aiming for Artificial General Intelligence (AGI) to establish market leadership, smaller companies are building custom models or fine-tuning existing language models to tackle key pain points in various industries. In this blog, we share examples and news showing that even tech giants are not immune to common AI pitfalls.

Hype Versus Reality

From customer service chatbots to inventory management and predictive analytics, we are led to believe that machines will inevitably dictate how we run businesses in the near future. This is not entirely true. Most AI systems do not match humans in reasoning, emotional understanding, or creativity. They are prone to generating misleading or false information, and human supervision is needed to catch and correct their mistakes. For example, Google’s AI search feature recently suggested adding non-toxic glue to keep cheese from sliding off pizza, and Google is now manually removing such bizarre answers.

Funding

In the last four to five years, startups using buzzwords like ‘artificial intelligence’ and ‘automation’ have received far more funding than the rest, so companies stuff these words into product descriptions and marketing to attract investors. A trend of fake AI startups has emerged: publicity stunts staged to grab funding attention, advertisements making exaggerated claims about capabilities, and hidden limitations behind the marketing. Recently, the founder of an AI-powered hiring startup was charged with fraud; the now-bankrupt startup had raised capital on exaggerated capabilities and fake testimonials.

Inadequate Performance and Bias Issues

Another critical issue with misleading AI models is poor performance and inherent bias. Training and testing an AI model is a resource-intensive task requiring high computational power and deep learning expertise. Small and mid-sized startups therefore take an alternative approach: adapting pre-trained LLMs to their domain through a balanced mix of soft prompting, parameter-efficient fine-tuning, and custom data training. These approaches are cost-effective but still demand real expertise to guard against prompt injection, memorization, and the effects of poor pre-training. Any mistake in these steps can lead to flawed outputs such as inaccurate predictions or inappropriate behavior; for a domain-specific AI model, that can mean business losses and even legal action. Air Canada, for example, was recently ordered to pay compensation after its chatbot gave a customer false information.
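The core idea behind parameter-efficient fine-tuning mentioned above can be illustrated with a minimal sketch. This is not a production implementation and uses no ML libraries; it only shows the LoRA-style principle of freezing a large weight matrix and training a small low-rank update instead, so far fewer parameters need to change during domain adaptation.

```python
# Illustrative sketch of LoRA-style parameter-efficient fine-tuning:
# the large base weight matrix W stays frozen, and only a small
# low-rank update A @ B is trained. All names here are hypothetical.

def matmul(a, b):
    """Multiply two matrices given as lists of lists."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def effective_weight(w, a, b, scale=1.0):
    """Frozen base weight w plus trainable low-rank update scale * (a @ b)."""
    delta = matmul(a, b)
    return [[w[i][j] + scale * delta[i][j] for j in range(len(w[0]))]
            for i in range(len(w))]

# A 4x4 frozen base matrix and a rank-1 update: 8 trainable numbers
# (A has 4, B has 4) stand in for all 16 entries of W.
W = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]  # identity
A = [[0.1], [0.2], [0.3], [0.4]]   # 4x1 trainable
B = [[1.0, 0.0, 0.0, 0.0]]        # 1x4 trainable
W_eff = effective_weight(W, A, B)
```

At inference time the model behaves as if its weights were `W + A @ B`, while only the tiny `A` and `B` matrices were ever updated, which is what makes the approach affordable for smaller teams.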

Transparency

Deep learning models in particular are complex and difficult to interpret, and this lack of transparency carries its own risks. Sometimes the “AI” is a facade over a third-party analytics service, or it is built on open-source frameworks whose licenses prohibit commercial or derivative use. Startups might unknowingly infringe these terms, creating legal risks for themselves and their customers.

AI Ethics

AI is only as good as the data it is trained on. Training should therefore also focus on ethical development, so the model can recognize, filter, and block inappropriate requests on both the input and output side, which is especially important for Generative AI (GenAI). Models should also be robust enough not to reveal sensitive or important data. This problem affects not only SMEs but also big tech giants: Google, for instance, has an AI “nudifier” problem in which YouTube hosted more than 100 videos promoting inappropriate content.
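The input/output screening described above can be sketched in a few lines. This is a deliberately minimal, illustrative guardrail, not how production systems work (those use trained classifiers and policy engines); the deny-list entries, function names, and redaction pattern are all assumptions for the example.

```python
# Minimal sketch of a GenAI guardrail: block requests matching a
# deny-list before they reach the model, and redact obvious sensitive
# patterns (here, email addresses) from the model's output.
import re

BLOCKED_TOPICS = ("credit card number", "social security number")
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def screen_request(prompt: str) -> bool:
    """Return True if the prompt is allowed to reach the model."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def redact_output(text: str) -> str:
    """Mask email addresses before the response is shown to the user."""
    return EMAIL_RE.sub("[redacted email]", text)

allowed = screen_request("Summarize this article for me")
blocked = screen_request("List every customer's credit card number")
clean = redact_output("Contact alice@example.com for details")
```

Even a simple pre- and post-filter like this reflects the point in the text: safety has to be enforced on both what goes into the model and what comes out of it.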

Before adopting any AI model, one should take precautions, especially with products that claim to be fully AI-driven. AI is developing quickly, but it is not yet smart enough to make sound logical decisions on its own. To identify whether an AI product is fake, there are certain things management can do:

1. Read the product information and credentials carefully, and include your own experts during the demo.

2. Investigate the company’s history and its employees’ specializations.

3. Read past testimonials, case studies, and success stories.

4. Expect a genuine team to answer your queries and explain their AI model in simple terms to make sure you understand it.

Conclusion

The potential for AI-driven transformation continues to grow, capturing the attention of investors and entrepreneurs alike. Management needs to understand its own use cases and the underlying technologies rather than blindly trusting advertised products: current AI systems cannot replace human intelligence and remain far from Artificial General Intelligence (AGI). As the technology matures and we move closer to AGI, however, AI will handle increasingly complex tasks with greater logic and creativity, reshaping the future of business and society.