I’m a cybersecurity expert who likes using AI. But I’d never share a few things with ChatGPT or its competitors.

This as-told-to essay is based on a conversation with Sebastian Gierlinger, vice president of engineering at Storyblok, a content management system company with 240 employees based in Austria. It has been edited for length and clarity.

I’m a security expert and a vice president of engineering at a content management system company, which has Netflix, Tesla, and Adidas among its clients.

I think that artificial intelligence and its most recent developments are a boon to work processes, but the newer capabilities of these generative AI chatbots also require more care and awareness.

Here are four things I would keep in mind when interacting with AI chatbots like OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, or Perplexity AI.

Liken it to using social media

An important thing to remember when using these chatbots is that the conversation is not only between you and the AI.

I use ChatGPT and similar large language models myself for holiday suggestions and type prompts like: “Hey, what are great sunny locations in May with decent beaches and at least 25 degrees.”

But problems can come up if I am too specific. The company could use those details to train its next model, and someone could then ask the new system about me, making parts of my life searchable.

The same is true for sharing details about your finances or net worth with these LLMs. While we haven’t seen a case of this happening yet, having personal details fed into the system and then revealed in searches would be the worst outcome.

There could already be models that can estimate your net worth based on where you live, what industry you are in, and spare details about your parents and your lifestyle. That’s probably enough to work out whether you are a viable target for scams, for example.

If you are in doubt about what details to share, ask yourself if you would post it on Facebook. If your answer is no, then don’t upload it to the LLM.

Follow company AI guidelines

As using AI in the workplace becomes common for tasks like coding or analysis, it is crucial to follow your company’s AI policy.

For example, my company has a list of confidential items that we are not allowed to upload to any chatbot or LLM. This includes things like salaries, employee information, and financial performance.

We do this because we don’t want somebody to type in a prompt like “What is Storyblok’s business strategy?” and have ChatGPT spit out “Storyblok is currently working on 10 new opportunities, which are companies 1, 2, 3, and 4, and it expects revenue of X, Y, Z dollars in the next quarter.” That would be a huge problem for us.

For coding, we have a policy that AI like Microsoft’s Copilot cannot be held responsible for any code. All code produced by AI must be checked by a human developer before it is stored in our repository.

Use LLMs with caution at work

In reality, about 75% of companies don’t have an AI policy yet. Many employers also haven’t purchased corporate AI subscriptions and have simply told their employees: “Hey, you’re not allowed to use AI at work.”

But people resort to using AI with their private accounts because people are people.

This is when being careful about what you input into an LLM becomes important.

In the past, there was no real reason to upload company data to a random website. But now, employees in finance or consulting who would like to analyze a budget, for example, could easily upload company or client numbers into ChatGPT or another platform and ask it questions. They would be giving up confidential data without even realizing it.

Differentiate between chatbots

It is also important to differentiate between AI chatbots since they are not all built the same.

When I use ChatGPT, I trust that OpenAI and everyone involved in its supply chain do their best to ensure cybersecurity and that my data won’t leak to bad actors. I trust OpenAI at the moment.

The most dangerous AI chatbots, in my opinion, are the ones that are homegrown. They are found on airline or doctors’ websites, and the companies behind them may not be investing in all the necessary security updates.

For example, a doctor may include a chatbot on their website to do an initial triage, and users may start entering very personal health data that could reveal their illnesses to others if the data is breached.

As AI chatbots become more humanlike, we are swayed to share more and open up about topics we would not have before. As a general rule of thumb, I would urge people not to blindly use every chatbot they come across, and to avoid being too specific regardless of which LLM they are talking to.

Do you work in tech or cybersecurity and have a story to share about your experience using AI? Get in touch with this reporter: shubhangigoel@insider.com.


