The future of generative AI is niche, not generalized

Whether this really amounts to an “iPhone moment” or a serious threat to Google search isn’t obvious at present. While it will likely change user behaviors and expectations, the first shift will be organizations pushing to apply tools built on large language models (LLMs) to their own data and services.

And this, ultimately, is the key — the significance and value of generative AI today is not really a question of societal or industry-wide transformation. It’s instead a question of how this technology can open up new ways of interacting with large and unwieldy amounts of data and information.

OpenAI is clearly attuned to this fact and senses a commercial opportunity: although the list of organizations taking part in the ChatGPT plugin initiative is small, OpenAI has opened a waiting list through which companies can apply for access to the plugins. In the months to come, we will no doubt see many new products and interfaces backed by OpenAI’s generative AI systems.

While it’s easy to fall into the trap of seeing OpenAI as the sole gatekeeper of this technology — and ChatGPT as the go-to generative AI tool — this fortunately is far from the case. You don’t need to sign up on a waiting list or have vast amounts of cash available to hand over to Sam Altman; instead, it’s possible to self-host LLMs.

This is something we’re starting to see at Thoughtworks. In the latest volume of the Technology Radar — our opinionated guide to the techniques, platforms, languages and tools being used across the industry today — we’ve identified a number of interrelated tools and practices that indicate the future of generative AI is niche and specialized, contrary to what much mainstream conversation would have you believe.

Unfortunately, we don’t think this is something many business and technology leaders have yet recognized. The industry’s focus has been fixed on OpenAI, which means the emerging ecosystem of tools beyond it — exemplified by projects like GPT-J and GPT-Neo — and the more DIY approach they can facilitate have so far been somewhat neglected. This is a shame, because these options offer many benefits. For example, a self-hosted LLM sidesteps the very real privacy issues that can come from connecting data with an OpenAI product. In other words, if you want to apply an LLM to your own enterprise data, you can do precisely that yourself; the data doesn’t need to go elsewhere. Given both industry and public concerns with privacy and data management, being cautious rather than being seduced by the marketing efforts of big tech is eminently sensible.

A related trend we’ve seen is domain-specific language models. Although these are also only just beginning to emerge, fine-tuning publicly available, general-purpose LLMs on your own data could form a foundation for developing incredibly useful information retrieval tools. These could be used, for example, on product information, content, or internal documentation. In the months to come, we think you’ll see more examples of these being used to do things like helping customer support staff and enabling content creators to experiment more freely and productively.
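To make the retrieval side of this concrete, here is a minimal, self-contained sketch of ranking internal documents against a user query. It deliberately uses a toy TF-IDF scorer rather than an LLM: in a real domain-specific system, a step like this (or a vector-embedding equivalent) would select the relevant internal documents to hand to a fine-tuned model. The `DOCS` corpus, the `tf_idf_scores` function, and all names here are hypothetical illustrations, not any particular product’s API.

```python
import math
from collections import Counter

# Toy stand-in for the "product information, content, or internal
# documentation" mentioned above (hypothetical example data).
DOCS = {
    "returns-policy": "Customers may return any product within 30 days for a full refund.",
    "shipping": "Standard shipping takes five business days; express shipping takes two.",
    "warranty": "All hardware products carry a one year limited warranty.",
}

def tokenize(text):
    # Lowercase and strip trailing punctuation; real systems use proper tokenizers.
    return [w.strip(".,;").lower() for w in text.split()]

def tf_idf_scores(query, docs):
    """Rank docs against a query with a minimal TF-IDF score (toy sketch)."""
    tokenized = {name: tokenize(text) for name, text in docs.items()}
    n = len(docs)

    def idf(term):
        # Smoothed inverse document frequency: rarer terms weigh more.
        df = sum(1 for toks in tokenized.values() if term in toks)
        return math.log((n + 1) / (df + 1)) + 1

    scores = {}
    for name, toks in tokenized.items():
        counts = Counter(toks)
        scores[name] = sum(
            counts[t] / len(toks) * idf(t) for t in tokenize(query)
        )
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

best, _ = tf_idf_scores("how do I return a product for a refund", DOCS)[0]
print(best)  # the returns-policy document ranks first
```

In a domain-specific assistant, the top-ranked documents would then be passed as context to the language model, which is what grounds its answers in your data rather than in whatever it absorbed during pretraining.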

If generative AI does become more domain-specific, the question of what this actually means for humans remains. However, I’d suggest that this view of the medium-term future of AI is a lot less threatening and frightening than many of today’s doom-mongering visions. By better bridging the gap between generative AI and more specific and niche datasets, over time people should build a subtly different relationship with the technology. It will lose its mystique as something that ostensibly knows everything, and it will instead become embedded in our context.
