Using artificial intelligence responsibly
Artificial intelligence (AI) has become an integral part of our professional lives. Whether it's chatbots, code generation tools, or automated analytics, these technologies are transforming how we work.
But their power comes with significant risks around confidentiality, security, and the quality of responses.
Basic reminder: AI is not intelligent
The term "artificial intelligence" is misleading.
A generative AI has no understanding of the world, no consciousness, no empathy. It applies statistical rules and models created by humans.
- It doesn't understand irony, subtext, or implicit context.
- It does not reason: it predicts the most likely continuation of a text.
- It shows neither discernment nor common sense.
- It has no understanding of the world we live in.
- It cannot innovate or demonstrate creativity, in the human sense of the term, without human input.
- It has no emotions, no emotional intelligence, and certainly no independent will.
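To make "predicting the most likely continuation of a text" concrete, here is a minimal sketch: a toy bigram model that simply picks the word that most often followed the current word in its training text. Real language models are vastly larger and more sophisticated, but the principle of statistical continuation is the same. The corpus and function names here are illustrative only.

```python
from collections import Counter, defaultdict

# Toy training text; real models are trained on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the most frequent continuation seen in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" most often here
```

No understanding is involved at any point: the model outputs "cat" only because that sequence was most frequent in what it saw.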
On the other hand, it has extraordinary computing and information access capabilities.
It is this combination of mathematical power and imitation of language that gives the illusion of intelligence. Because it imitates our language so well, it can appear intelligent, but it is not. This imitation rests on analyzing the structure of language and on a mathematical representation of how we communicate.
The term "Augmented Intelligence" would probably be more accurate: a tool that amplifies our capabilities, but does not replace them.
Data confidentiality: the main IT risk
AI providers (OpenAI, Anthropic, Google, Microsoft, xAI…) are very interested in your prompts.
In most cases, your prompts can be used to improve their models, sometimes including review by human annotators. Depending on what a prompt contains, this can be a serious problem:
- information on internal projects
- confidential discussions
- personal data or HR information
For our organizations, this data constitutes a critical asset.
When it is sent to a publicly accessible external service, it can be exposed.
For IT managers, the challenge is no longer just preventing intrusions, but also preventing intentional or accidental leaks to external systems.
Two sources of risk: human error and attacks
Human errors are the most frequent:
- misunderstanding of sharing or history options
- use of public AI to process sensitive data
- sending full documents (reports, databases, donor lists) without realizing the consequences

At Samsung in 2023, employees copied confidential code into ChatGPT; that data ended up being reused in responses to other users.

Platform bugs add to the risk: some private ChatGPT and Grok conversations have been found indexed on Google because of bugs.
Attacks and manipulations:
AIs are becoming targets because they concentrate thousands of prompts containing sensitive information and data from companies around the world.
Researchers have demonstrated that it is possible to force a model to disclose elements of training data — therefore potentially prompts from other users.
Attacking a model can allow access to internal data from other organizations.
Best practices for securing the use of AI
- Disallow the use of your data for model training.
All major tools offer this option:
ChatGPT: Settings → Data Management → Improve model for everyone (disable)
Gemini: Activities → Keep Activity (disable)
Claude: Settings → Privacy → Help improve Claude
Microsoft Copilot: harder to find, but available in the privacy settings.
This is the first action to take before any use.
Note that if you use Gemini with your SIL or Wycliffe account, your data is not used for model training.
- Think before sending a prompt
Always ask yourself: "Does what I am about to send contain sensitive information?" If so, that data should not be sent to the AI. Here are some examples of information that should not be sent:
- Personal, confidential, proprietary or sensitive information
- Financial details
- Personal and intimate thoughts
- Confidential information about your workplace
- Details regarding your place of residence
- Photos
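This check can be partially automated. The sketch below flags a few common kinds of sensitive data in a prompt before it is sent; the patterns and names are hypothetical examples, and a real policy would cover far more cases.

```python
import re

# Hypothetical patterns; extend to match your organization's policy.
SENSITIVE_PATTERNS = {
    "email address": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "phone number": r"\+?\d[\d\s().-]{7,}\d",
    "IBAN": r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b",
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the kinds of sensitive data detected in a prompt."""
    return [kind for kind, pattern in SENSITIVE_PATTERNS.items()
            if re.search(pattern, prompt)]

print(flag_sensitive("Contact jane.doe@example.org about the audit."))
# ['email address']
```

A tool like this catches obvious slips, but it is no substitute for human judgment: context (an internal project name, an HR situation) is often what makes information sensitive.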
- Systematically anonymize the information
To secure a prompt that may contain sensitive data, take anonymization measures such as:
- replacing the names of people, projects
- changing amounts of money (if applicable)
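These replacements can themselves be scripted so they are applied consistently before a prompt leaves your machine. The mapping and patterns below are hypothetical placeholders, not a complete solution.

```python
import re

# Hypothetical mapping of sensitive names to neutral placeholders.
REPLACEMENTS = {
    "Alice Martin": "Person A",
    "Project Nightingale": "Project X",
}

def anonymize(prompt: str) -> str:
    """Replace known names and money amounts with placeholders."""
    for sensitive, placeholder in REPLACEMENTS.items():
        prompt = prompt.replace(sensitive, placeholder)
    # Mask money amounts such as "$12,500" or "12500 USD".
    prompt = re.sub(r"\$\s?\d[\d,.]*|\b\d[\d,.]*\s?(USD|EUR)\b",
                    "[AMOUNT]", prompt)
    return prompt

print(anonymize("Alice Martin approved $12,500 for Project Nightingale."))
# Person A approved [AMOUNT] for Project X.
```

Keep the mapping itself local and private: it is exactly the kind of sensitive data the script exists to protect.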
AI reliability: the question of hallucinations
Generative AIs have a major flaw: they invent things when they don't know the answer.
They can:
- assert false things with certainty
- invent references, sources, events
- mix real facts and fictional elements

The internet contains all kinds of information: truths and lies, biased information, well-intentioned (but misguided) advice, and humorous content written in a factual tone. AI cannot reliably distinguish between them.
Here is a real example of an AI hallucination:
“British police recently conducted a risk assessment for potential unrest at a football match between an English and an Israeli club. Following this assessment, the police decided to ban some Israeli fans from entering the stadium. This sparked a diplomatic incident between the two countries.”
The problem: the information Copilot provided about this alleged prior incident was false. The match in question had, in reality, never taken place.
This episode is a reminder that AI does not verify its sources (and can even hallucinate them), and that final responsibility for verification always lies with the human.
Systematic verification: the golden rule
For any result provided by an AI, certain checks should be carried out:
- Consult the cited sources (if available)
- Evaluate each source's reliability (reputation, author, date)
- Cross-reference with other tools: Wikipedia, Google, official websites
- Ask for confirmation by a subject matter expert in case of significant impact
AI should be seen as a starting point, never as an authority.
The specific risks of AI-powered plugins
AI-powered plugins are certainly very convenient, but they add additional risks:
- It's no longer just you and the chatbot. With plugins, intermediary providers gain access to your prompts, can collect contextual information, and interact with external services. The data-access rights and privacy policies of this chain of actors are often unclear.
Here are some additional risks to consider:
- Multiple points of leakage: each plugin can send data to third-party servers... which may themselves depend on other services. Each one can be attacked, for example by prompt injection. If a single link is affected, the entire chain is affected. This increases the attack surface.
- It also increases the number of tools in which the user can make handling errors.
- Total loss of control over data: prompts, browsing history, browser information, marketing data... it's like allowing giant cookies that can access and cross-reference highly personal information.
- These plugins can also be compromised, malicious, or simply obsolete.
Here are some best practices:
- Disable all non-essential AI plugins.
- Carefully control each permission granted.
- Update plugins very regularly.
- Never submit sensitive data to a plugin.
Conclusion: AI is a powerful tool… in the right hands
AI is a fantastic tool. But like any tool, it can help or harm depending on how it is used.
For IT professionals, the responsibility is twofold:
- protect our organizations' information
- ensure that AI tools are used with discernment and control
AI does not replace human creativity. Nor does it replace analysis, prudence, competence or intuition — qualities that remain unique to humans.
Like skilled craftspeople, we use AI as one tool among many to execute the work we do with our own creativity. We use it as an aid, but we don't ask it to do our work for us.