Artificial intelligence is no longer a dream; it has landed in our daily lives, and the ethical debates surrounding it have grown accordingly, especially over how much data these AI services collect from users. After all, where there is mass storage of potentially sensitive information, there are cybersecurity and privacy concerns.
Microsoft’s Bing search engine, newly outfitted with OpenAI’s ChatGPT technology and currently rolling out, has raised its own concerns, as Microsoft hasn’t had the best track record when it comes to respecting its customers’ privacy.
Microsoft has at times been questioned over how it manages and accesses user data, though far less so than contemporaries such as Apple, Google, and Facebook, even though it handles a great deal of user information and sells targeted advertising.
It has nonetheless been targeted by government regulators and privacy organizations: France, for example, demanded that Microsoft stop tracking users through Windows 10, and the company responded with a series of comprehensive compliance measures.
Jennifer King, director of consumer privacy at Stanford Law School’s Center for Internet and Society, speculated that this is due in part to Microsoft’s long-standing position in its market and the long-standing relationships with governments afforded to it by that legacy. According to her, the company has more experience dealing with regulators, which may have spared it the level of scrutiny its competitors face.
An influx of data
Microsoft, like other companies, is now having to deal with a massive influx of user chat data thanks to the popularity of chatbots like ChatGPT. According to The Telegraph, Microsoft employs reviewers who analyze user submissions to limit harm and respond to potentially dangerous input, combing through users’ chat logs with the chatbot and moderating “misbehavior”.
The company claims it strips personal information from submissions, that users’ chat texts are accessible only to certain reviewers, and that these measures protect users even while their conversations with the chatbot are under review.
A Microsoft spokesperson clarified that the company uses both automated review (given the sheer volume of data to sift through) and manual reviewers, adding that this is standard practice for search engines and is covered in Microsoft’s privacy statement.
The spokesperson sought to reassure those concerned, saying Microsoft follows industry standards for user privacy, including “pseudonymization, encryption at rest, secure and approved data access management, and data retention procedures.”
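Pseudonymization, in this context, typically means swapping direct identifiers for keyed stand-ins before a record ever reaches a human reviewer. As a minimal illustrative sketch only (this is not Microsoft’s actual pipeline; the function names, key handling, and record shape below are assumptions), the idea might look like this in Python:

```python
import hmac
import hashlib

# Hypothetical secret held by the data platform, never shared with reviewers.
# A real system would load this from a key-management service, not source code.
PSEUDONYMIZATION_KEY = b"replace-with-managed-secret"

def pseudonymize_user_id(user_id: str) -> str:
    """Derive a stable, keyed pseudonym for a user identifier.

    HMAC-SHA256 keeps the mapping consistent (the same user always maps to
    the same pseudonym, so abuse patterns stay traceable) while making it
    infeasible for a reviewer to recover the original ID without the key.
    """
    digest = hmac.new(PSEUDONYMIZATION_KEY, user_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

def prepare_chat_log_for_review(record: dict) -> dict:
    """Return a copy of a chat-log record that is safer to show a reviewer."""
    return {
        "user": pseudonymize_user_id(record["user_id"]),
        # The message text itself would still need separate PII scrubbing.
        "message": record["message"],
    }

# Example: a reviewer sees an opaque token instead of an email address.
print(prepare_chat_log_for_review({"user_id": "alice@example.com", "message": "hello"}))
```

The point of the keyed approach is that pseudonyms are useful for moderation (repeat offenders remain linkable) without exposing who the user actually is, which is broadly what statements like Microsoft’s imply.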
Additionally, reviewers can view user data only on the basis of “a verified business need only and not third parties.” Microsoft has since updated its privacy statement to summarize and clarify the above: user information is collected, and human Microsoft employees may be able to see it.
Under the spotlight
Microsoft isn’t the only company under scrutiny for how it collects and handles user data when it comes to AI chatbots. OpenAI, the company that created ChatGPT, also revealed that it is examining user conversations.
Recently, Snap, the company behind Snapchat, introduced a ChatGPT-powered chatbot that slots into the app’s familiar messaging format. It has warned users not to submit sensitive personal information, presumably for similar reasons.
These concerns multiply when ChatGPT and ChatGPT-enabled bots are used by people working at companies with sensitive and confidential information of their own. Many firms have warned employees not to submit confidential company information to these chatbots, and some, such as JP Morgan and Amazon, have restricted or outright banned their use at work.
Personal user data has been, and continues to be, a key issue in technology in general. Misuse, or outright malicious use, of data can have dire consequences for individuals and organizations alike. Every new technology raises these risks, but also the potential rewards.
Tech companies had better pay close attention to keeping our personal data as safe as possible, or they risk losing their customers’ trust and killing their fledgling AI ambitions.