In an interview with Bloomberg News, Microsoft Corp. President Brad Smith hinted that the company is taking a different approach to disinformation than other technology companies: it won’t label social media posts that appear to be false, to avoid the perception that it is attempting to censor speech online.
In response to a question about Microsoft’s role in defining disinformation, Smith said, “I don’t think that people want governments to tell them what is true or untrue. Additionally, I don’t believe they are very interested in hearing from IT businesses.”
The remarks are Smith’s clearest indication yet that Microsoft is pursuing a novel strategy for monitoring and thwarting digital propaganda campaigns.
Facebook, owned by Meta Platforms Inc., and Twitter Inc. have come under fire for their efforts to identify and take down false or misleading material from their websites and apps. The question of truth has become politicized, with some US lawmakers claiming that social media companies suppress right-wing views. Meanwhile, the US Department of Homeland Security shut down its own disinformation office earlier this year in response to public uproar.
Microsoft, which runs the Bing search engine and the LinkedIn social network, has recently invested in information-operations analysts and tools to monitor disinformation efforts. These experts are collaborating with Microsoft’s cybersecurity teams, who have helped the company thwart alleged Russian, Iranian, Chinese, and North Korean state hackers by dismantling the infrastructure that keeps their malicious software active.
Tom Burt, corporate vice president for customer security and trust, stated, “We’ll be looking into how we might accomplish it in the context of influence operations.”
Microsoft is now concentrating on detecting, and publicizing, disinformation campaigns that target its private- and public-sector clients. Although the specifics of the strategy are still being worked out, Smith indicated that its main objective is to be “open.”
Microsoft’s policy team will share its propaganda-related findings with foreign governments, much the way it now does with its cybersecurity incident reports, to persuade leaders to adopt a set of guidelines for nation-state behavior online.
People are more likely to take action, and to engage in discussion about what governments around the world should do to address these concerns, if they are informed about what is happening, according to Burt.
Microsoft released a report this year on Russian cyber-espionage against Ukrainian targets, stating that the intrusions were coordinated with military actions and disinformation campaigns. In one instance, the company said, hackers stole material from nuclear-industry organizations to help the military and state-run media spread falsehoods about Ukraine’s supposed development of chemical and biological weapons and to justify the seizure of nuclear power stations by soldiers.
The company also said it would remove the RT app from its Windows app store and return links from RT and Sputnik only “when a user wishes to visit those pages,” to lessen the prominence of Russian state-sponsored media outlets.
Microsoft, according to Smith, intends to give the public more details about who is speaking and what they are saying, so that people can decide for themselves whether the content is accurate.