Can we strike a balance between digital privacy and online security?

How much of our personal information are we willing to give up for the promise of a safer online experience? We take a look as part of The Drum’s data deep dive.

Simply put, digital privacy covers all of the rights we have over the use of our personal data and information. Online security refers to how that data is protected and, when needed, used. It can be a little confusing, but the privacy paradox is no longer something ordinary people take lightly – just look at the backlash that Facebook and its WhatsApp and Instagram platforms have faced.

“The idea that we ‘trade’ data for anything misunderstands the nature of data itself, because data is a collection of ones and zeros that can easily be copied,” says Doc Searls, who directs ProjectVRM at the Berkman Center for Internet and Society at Harvard. He quotes Wired editor-in-chief Kevin Kelly’s joke that “the Internet is the world’s greatest copying machine.”

“From an economic perspective, data is a public good,” says Searls. “Claims that we ‘trade’ it assume that it should be property and that we can exchange it for something else.”

Searls notes that the system has been broken from the start. “There is no agency for individuals to protect their own privacy online, or to assert that agency at scale across all the organizations they engage with.”

He adds, “Life was no different in the natural world before we civilized it, starting with the privacy technologies we call clothing and shelter thousands of years ago.

“Meanwhile, the digital world is only decades old and we don’t have the equivalents of clothing and shelter there yet, beyond the choice of staying completely offline (with encrypted storage) or sending messages in encrypted form. Both of these approaches are much more useful to the wizards among us than to the rest of us Muggles.”

So what are the most controversial privacy concerns of our time?

1. Facial recognition

The moderation and censorship of social media content can be a divisive issue. Last May, the Online Safety Bill, which gives Ofcom the power to punish social networks that fail to remove “lawful but harmful” content, was introduced in the UK parliament. It was praised by many child safety campaigners but condemned by civil liberties organizations.

In early November 2021, Facebook’s new parent company, Meta, said it would no longer use facial recognition software to identify faces in photographs and videos after growing concerns over the technology. Users who had opted in to the software were notified if another user posted a photo or video of them.

While the technology can help combat fraud and identity theft, there have been several complaints in recent years accusing the company of creating and storing face scans without permission.

“We still see facial recognition technology as a powerful tool, for example for people who need to verify their identity or to prevent fraud and identity theft. We believe facial recognition can help for products like these, with privacy, transparency and control in place, so you decide if and how your face is used,” says Jerome Pesenti, vice-president of artificial intelligence at Facebook.

“But the many specific cases where facial recognition can be useful must be weighed against growing concerns about the use of this technology as a whole.”

Regarding the challenges social media companies face, Jim Fournier, CEO and founder of Tru Social Inc, notes that there are two huge ones. “The first is that the targeted advertising business model itself is based on tracking and profiling. This is fundamentally at odds with privacy. The second is that social media relies on a central algorithm requiring centralized moderation, which is by definition centralized censorship as well.”

2. Covid-19 contact tracing applications

Technology has played a major role in the ongoing recovery from the global pandemic, especially in the health, social and business sectors. In the UK, the NHS Covid-19 app contains details of relevant test results (for 14 days) and lets you know if you’ve been in close contact with someone who has since tested positive.

Many people have questioned the effectiveness of the app, and it has significant flaws, especially for more vulnerable groups such as the elderly or homeless people, who may not have access to smartphones.

In the United States, the adoption of legislation seems unlikely. “That might not be terrible given how little most members of Congress understand about the problem. Enforcement of existing antitrust laws would be a good place to start,” Fournier notes.

3. Apple tool for spotting child sexual abuse images

Last August, Apple announced the introduction of new child safety measures focused on detecting child sexual abuse material (CSAM) on the devices of US customers. In a statement, the tech giant said: “Our goal is to create technology that empowers people and enriches their lives – while helping them stay safe. We want to help protect children from predators who use communication tools to recruit and exploit them.”

The development met with mixed reviews, with some raising privacy concerns that the technology could be used by authoritarian governments to spy on citizens.

In response to these concerns, Apple backed off, announcing on September 3: “Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”

Searls concludes that Apple “presents itself as uncompromising when in fact it compromises a great deal”.
