
RISK MITIGATION AND THE FUTURE OF LARGE LANGUAGE MODELS IN THE ENTERPRISE MARKET BY DAMIEN DEIGHAN

Damien Deighan is CEO and Founder of Data Science Talent. With 20+ years’ experience in recruiting, Damien has served the world’s largest blue chip organisations and placed staff in over 20 countries. Damien is co-host of the popular Data Science Conversations podcast, which features insights from the world’s leading academics in data science.

Here, Damien reflects on issues organisations have experienced after adopting ChatGPT. Following incidents such as Samsung’s security breaches, how can enterprise companies take advantage of LLMs safely and effectively?

Since the launch of ChatGPT on November 30th 2022, the pace of development in the generative AI space has been incredible.

On 21st February 2023, Bain & Co announced a services alliance partnership with OpenAI, with the intention of embedding OpenAI’s technologies (ChatGPT, DALL·E and Codex) into their clients’ operations, having already done so across their own 18,000-strong workforce. Coca-Cola were swiftly announced as the first major corporate to engage with this new alliance, although interestingly no other major corporation has announced its involvement since.

Just four weeks later, OpenAI announced their plugins for ChatGPT, and popular platforms such as Wolfram, Expedia, Klarna and OpenTable were revealed as the first third-party platforms to integrate.

Microsoft’s heavy investment in OpenAI and their rapid deployment of ChatGPT across their product range, combined with their position as a trusted provider of corporate software applications, might suggest that deep integration of Microsoft/OpenAI products into large companies is inevitable.

However, this is not necessarily how things are likely to pan out. Two things happened in March 2023 that give us some clues as to what might happen next instead.

What the Samsung incident means for internal use of LLMs in business. 

In early April, several tech publications reported that Samsung employees had leaked sensitive corporate data via ChatGPT three times inside 20 days. The leaks included the transcription of a recorded internal meeting and source code for a new program in their semiconductor business unit. The problem is that in each instance employees chose to input proprietary information into a third-party platform, thereby removing that information from Samsung’s control and putting company IP at risk.

Samsung’s immediate response was to limit the use of ChatGPT and to announce that they are developing their own AI for internal use.

ChatGPT is an incredible piece of technology, and its use in business can help drive significant leaps in productivity. However, the Samsung incident is also a clear warning to enterprise leaders of the importance of ensuring the proper use of ChatGPT, so that company information and IP is not shared in this way.
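To make “proper use” concrete: one control enterprises can put in place is a gateway that screens outbound prompts before they ever reach a third-party LLM. The Python sketch below is a minimal, hypothetical illustration of that idea; the pattern names and rules are assumptions for the example, not Samsung’s or OpenAI’s actual tooling.

import re

# Hypothetical indicators of proprietary content -- purely illustrative, not
# production-grade detection. Real rules would be tuned to the organisation's policies.
SENSITIVE_PATTERNS = {
    "source_code": re.compile(r"def |class |#include|import |public static void"),
    "internal_marker": re.compile(r"confidential|internal use only|do not distribute", re.IGNORECASE),
    "credential": re.compile(r"(api[_-]?key|password|secret)\s*[:=]", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons): block the prompt if any sensitive pattern matches."""
    reasons = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    return (len(reasons) == 0, reasons)

if __name__ == "__main__":
    allowed, reasons = screen_prompt("def decode_wafer_map(data): ...")
    print(allowed, reasons)  # False ['source_code'] -- the prompt would be held for review

In practice, screening like this would sit alongside clear usage policy, employee training and audit logging, rather than replace them.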

For more insights into the future of LLMs in enterprise, read the article in full over on our magazine: https://issuu.com/datasciencetalent/docs/the_data_scientist_mag_issue_3_digital

 
