Google opposes Facebook-backed proposal for self-regulatory body in India – sources

NEW DELHI: Google has serious reservations about a proposal to set up a self-regulatory body for India's social media sector to hear user complaints, though the idea has support from Facebook and Twitter, sources with knowledge of the discussions told Reuters.
India in June proposed appointing a government panel to hear complaints from users about content moderation decisions, but has also said it is open to the idea of a self-regulatory body if the industry is willing.
The lack of consensus among the tech giants, however, increases the likelihood of a government panel being formed — a prospect that Meta Platforms Inc’s Facebook and Twitter are keen to avoid as they fear government and regulatory overreach in India, the sources said.
At a closed-door meeting this week, an executive from Alphabet Inc’s Google told other attendees the company was unconvinced about the merits of a self-regulatory body. The body would mean external reviews of decisions that could force Google to reinstate content, even if it violated Google’s internal policies, the executive was quoted as saying.
Such directives from a self-regulatory body could set a dangerous precedent, the sources also quoted the Google executive as saying.
The sources declined to be identified as the discussions were private.
In addition to Facebook, Twitter and Google, representatives from Snap Inc. and popular Indian social media platform ShareChat also attended the meeting. Together, the companies have hundreds of millions of users in India.
Snap and ShareChat also voiced concern about a self-regulatory system, saying the matter required much more consultation, including with civil society, the sources said.
Google said in a statement that it had attended a preliminary meeting and was engaging with the industry and the government, adding that it was "exploring all options" for a "best possible solution."
ShareChat and Facebook declined to comment. The other companies did not respond to Reuters requests for comment.
THORNY ISSUE
Self-regulatory bodies to police content in the social media sector are rare, though there have been instances of cooperation. In New Zealand, big tech companies have signed a code of practice aimed at reducing harmful content online.
Tension over social media content decisions has been a particularly thorny issue in India. Social media companies often receive takedown requests from the government or remove content proactively. Google's YouTube, for example, removed 1.2 million videos in the first quarter of this year for violating its guidelines, the highest number for any country in the world.
India's government is concerned that users whose content is taken down have no proper system to appeal those decisions, and that their only legal recourse is to go to court.
Twitter has faced backlash after it blocked accounts of influential Indians, including politicians, citing violation of its policies. Twitter also locked horns with the Indian government last year when it declined to comply fully with orders to take down accounts the government said spread misinformation.
An initial draft of the proposal for the self-regulatory body said the panel would have a retired judge or an experienced person from the field of technology as chairperson, as well as six other individuals, including some senior executives at social media companies.
The panel’s decisions would be “binding in nature,” stated the draft, which was seen by Reuters.
Western tech giants have for years been at odds with the Indian government, arguing that strict regulations are hurting their business and investment plans. The disagreements have also strained trade ties between New Delhi and Washington.
US industry lobby groups representing the tech giants say a government-appointed review panel raises concerns about whether it could act independently if New Delhi controls who sits on it.
The proposal for a government panel was open to public consultation until early July. No fixed date for implementation has been set.
LONDON: Australia's competition watchdog said on Friday that the country's Federal Court had ordered Alphabet Inc's Google to pay A$60 million ($42.7 million) in penalties for misleading users about the collection of their personal location data.
The court found Google misled some customers about personal location data collected through their Android mobile devices between January 2017 and December 2018.
Google misled users into believing that the "Location History" setting on their Android phones was the only way it could collect location data, when a separate setting that tracks web and app activity also allowed location data to be collected and stored, the Australian Competition & Consumer Commission (ACCC) said.
The watchdog, which estimates that 1.3 million Google account users in Australia may have been affected, had started the proceedings against the company and its local unit in October 2019.
Google took remedial measures in 2018, the regulator said.
In an emailed statement, Google said it had settled the matter and added that it had made location information simple to manage and easy to understand.
The search engine giant has been embroiled in legal action in Australia over the past year as the government mulled and passed a law to make Google and Meta Platforms’ Facebook pay media companies for content on their platforms.
LONDON: Twitter Inc. on Thursday set out a plan to combat the spread of election misinformation that revives previous strategies, but civil and voting rights experts said it would fall short of what is needed to prepare for the upcoming US midterm elections.
The social media company said it will apply its civic integrity policy, introduced in 2018, to the Nov. 8 midterms, when numerous US Senate and House of Representatives seats will be up for election. The policy relies on labeling or removing posts with misleading content, focused on messages intended to stop voting or claims intended to undermine public confidence in an election.
In a statement, Twitter said it has taken numerous steps in recent months to “elevate reliable resources” about primaries and voting processes. Applying a label to a tweet also means the content is not recommended or distributed to more users.
The San Francisco-based company is currently in a legal battle with billionaire Elon Musk over his attempt to walk away from his $44-billion deal to acquire Twitter.
Musk has called himself a “free speech absolutist,” and has said Twitter posts should only be removed if there is illegal content, a view supported by many in the tech industry.
But civil rights and online misinformation experts have long accused social media and tech platforms of not doing enough to prevent the spread of false content, including the idea that President Joe Biden did not win the 2020 election.
They warn that misinformation could be an even greater challenge this year, as candidates who question the 2020 election are running for office, and divisive rhetoric is spreading following an FBI search of former President Donald Trump’s Florida home earlier this week.
“We’re seeing the same patterns playing out,” said Evan Feeney, deputy senior campaign director at Color of Change, which advocates for the rights of Black Americans.
In a blog post outlining the plan, Twitter said a test of redesigned labels led to a decline in users retweeting, liking and replying to misleading content.
Researchers say Twitter and other platforms have a spotty record in consistently labeling such content.
In a paper published last month, Stanford University researchers examined a sample of posts on Twitter and Meta Platforms’ Facebook that altogether contained 78 misleading claims about the 2020 election. They found that Twitter and Facebook both consistently applied labels to only about 70 percent of the claims.
Twitter’s efforts to fight misinformation during the midterms will include information prompts to debunk falsehoods before they spread widely online.
More emphasis should be placed on removing false and misleading posts, said Yosef Getachew, media and democracy program director at nonpartisan group Common Cause.
“Pointing them to other sources isn’t enough,” he said.
Experts also questioned Twitter’s practice of leaving up some tweets from world leaders in the name of public interest.
"Twitter has a responsibility and ability to stop misinformation at the source," Feeney said, adding that world leaders and politicians should face a higher standard for what they tweet.
Twitter leads the industry in releasing data on how its efforts to intervene against misinformation are working, said Evelyn Douek, an assistant professor at Stanford Law School who studies online speech regulation.
Yet more than a year after soliciting public input on what the company should do when a world leader violates its rules, Twitter has not provided an update, she said.
Mostafa Al-Ahmad, one of the three prominent Iranian filmmakers arrested during Iran’s latest crackdown on dissent, has been released on bail, according to Radio Farda. 
The 52-year-old was arrested along with filmmakers Mohammad Rasoulof and Jafar Panahi in June, days after signing an open letter calling on security forces in the country to “lay down their arms” during widespread demonstrations over “corruption, theft, inefficiency, and repression,” Radio Farda reported. 
Al-Ahmad and Panahi had reportedly contracted COVID-19 in Tehran’s notorious Evin prison and were denied hospital care outside the detention facility, the report said. 
Outrage erupted across the country after more than 40 people were killed in May when a 10-storey building collapsed in the southwestern city of Abadan. At the time, public outcry called for corrupt officials to be held accountable.
The filmmakers’ arrest sparked international criticism from European film and arts festivals, including the Cannes Film Festival. 
“The Festival de Cannes strongly condemns these arrests as well as the wave of repression obviously in progress in Iran against its artists,” festival organizers said. “The festival calls for the immediate release of Mohammad Rasoulof, Mostafa [Al-Ahmad] and Jafar Panahi.”
“The Festival de Cannes also wishes to reassert its support to all those who, throughout the world, are subjected to violence and repression. The festival remains and will always remain a haven for artists from all over the world and it will relentlessly be at their service in order to convey their voices loud and clear, in the defense of freedom of creation and freedom of speech.”
DUBAI: The Arab and Middle Eastern Journalists Association has launched a series of awards to highlight exceptional work by and about Arab, Middle Eastern and North African communities.
“Promoting accurate and nuanced coverage of the Middle East and North Africa regions and people is at the core of our mission,” said Hoda Osman, AMEJA president.
“We’re excited to launch the AMEJA awards so we can lift up exceptional news coverage by journalists working tirelessly to get the story right.”
The program includes three awards: best coverage of the MENA region; best coverage of MENA immigrant and heritage communities in North America; and the Walid El-Gabry Memorial Award, named after one of AMEJA's founders, which recognizes the work of an AMEJA member.
Each winner will receive a $500 cash prize.
The first two awards are open to all journalists.
Entries will be judged by a jury panel, including Mohamad Bazzi, NYU journalism professor and director of the Kevorkian Center for Near Eastern Studies; Nima Elbagir, CNN chief international investigative correspondent; Leila Fadel, host of NPR’s Morning Edition; Kareem Fahim, Middle East bureau chief for The Washington Post; Ayman Mohyeldin, MSNBC host of the show “Ayman”; and Jason Rezaian, columnist at The Washington Post and host of the 544 Days podcast.
The Walid El-Gabry Memorial Award will be voted on by AMEJA’s members.
AMEJA is accepting submissions until Aug. 28. To be eligible, the work must have been published, in English, between Jan. 1, 2021, and Aug. 1, 2022. Entries can be submitted in any format from print to podcasts.
Winners will be announced in the fall of this year.
LONDON: Facebook is under intense scrutiny after handing over to Nebraska police the private messages of a 17-year-old girl accused of crimes relating to an abortion.
The teenager and her mother are accused of breaking Nebraska's law prohibiting abortion after 20 weeks of pregnancy. According to court files, the teenager miscarried at 23 weeks and secretly buried the fetus with her mother's help.
The two were charged in July with allegedly removing, concealing or abandoning a dead human body, concealing the death of another person and false reporting.
Authorities obtained incriminating messages between the mother and daughter after approaching Facebook with a search warrant.
Facebook reportedly had the option of challenging the court's decision but chose instead to give police access to the teen's direct messages. The teenager now faces three criminal charges as a result of using an abortion pill purchased online and burying the fetus.
"Nothing in the valid warrants we received from local law enforcement in early June, prior to the Supreme Court decision, mentioned abortion. The warrants concerned charges related to a criminal investigation and court documents indicate that police at the time were investigating the case of a stillborn baby who was burned and buried, not a decision to have an abortion," Meta spokesperson Andy Stone said in a statement.
This case is one of the first in which a person's social media activity has been used against them in a state where access to abortion is restricted, and critics see it as a betrayal after tech companies vowed to protect users in the wake of the US Supreme Court's overturning of Roe v. Wade.
The news comes just a few weeks after Meta CEO Mark Zuckerberg pledged to “expand encryption across the platform in an effort to keep people safe.” Meta also said it would offer financial assistance to employees having to travel to a different state to seek an abortion.
