Google just announced a hefty move in its ad war against Facebook
Over the next few weeks, Google is bringing cross-device measurement to its DoubleClick advertising platform for the first time.
Apple CEO Tim Cook has made no secret of his disdain for online services that ask you to trade highly personal data for convenience — a trade that describes most big advertising-supported technology companies. But last night, in some of his strongest comments to date, Cook said the erosion of privacy represents a threat to the American way of life. Cook spoke at a dinner in Washington, DC, hosted by the Electronic Privacy Information Center, which honored him as a “champion of freedom” for his leadership at Apple.
“Our privacy is being attacked on multiple fronts,” Cook said in a speech that he delivered remotely, according to EPIC. “I’m speaking to you from Silicon Valley, where some of the most prominent and successful companies have built their businesses by lulling their customers into complacency about their personal information. They’re gobbling up everything they can learn about you and trying to monetize it. We think that’s wrong. And it’s not the kind of company that Apple wants to be.”
Apple chief Tim Cook has made a thinly veiled attack on Facebook and Google for “gobbling up” users’ personal data.
In a speech, he said people should not have to “make trade-offs between privacy and security”.
While not naming Facebook and Google explicitly, he attacked companies that “built their businesses by lulling their customers into complacency”.
Rights activists Privacy International told the BBC it had some scepticism about Mr Cook’s comments.
“It is encouraging to see Apple making the claim that they collect less information on us than their competitors,” Privacy International’s technologist Dr Richard Tynan said.
“However, we have yet to see verifiable evidence of the implementation of these claims with regard to their hardware, firmware, software or online services.
“It is crucial that our devices do not betray us.”
Facebook. Instagram. Google. Twitter. All services we rely on, and all services we believe we don’t have to pay for. Not with cash, anyway. But ad-financed Internet platforms aren’t free, and the price they extract in privacy and control is only getting higher.
A recent Pew Research Center poll shows that 93 percent of the public believes that “being in control of who can get information about them is important,” and yet the amount of information we generate online has exploded and we seldom know where it all goes.
At an 18th-century mansion in England’s countryside last week, current and former spy chiefs from seven countries faced off with representatives from tech giants Apple and Google to discuss government surveillance in the aftermath of Edward Snowden’s leaks.
The three-day conference, which took place behind closed doors and under strict rules about confidentiality, was aimed at debating the line between privacy and security.
You want to know the habits of mobile phone users? Big Data. You want to reach a targeted clientele on the Web? Big Data. You want to decode the secret behind Netflix’s latest hit, or figure out which potholes to fix in a neighbourhood? Big Data! All you need is a good algorithm and a decent quantity of data, and the companies that analyze Big Data promise to find answers to all sorts of questions. But who is asking these questions? And can we trust algorithms to make decisions?
2015 is the year of Big Data. The concept has existed for some forty years already, but according to Forbes, this is the year that marks Big Data’s entry into the business world and governance. Companies everywhere are retooling their business models to profit from a new source of wealth: our personal data.
Big Data Mashup
Statistical analysis has always been with us. By taking surveys, or by tallying the answers from a census form, we can estimate, more or less, the probability that a candidate will be elected, the number of car accidents in a year, or even the type of person most likely to repay a loan. Mistakes happen, but the numbers help uncover trends. And based on those trends, we hope to make the right decisions.
Nowadays, we produce trends from quintillions of data points. Take the information collected by institutions and credit companies, add the browsing history tracked by cookies (episode 2), the data from our mobile phones (episode 4), 50 million photos, 40 million Tweets, and billions of documents exchanged daily. Now add the data produced by fitness trackers, “smart” objects and gadgets, and you’ll understand why “Big” is the right adjective to describe the vast expanse of available information.
However, the truly revolutionary aspect of Big Data isn’t so much its size as the way all of this data can be mixed. Beyond what each piece of information says about us (sometimes in spite of us), it is the correlation and cross-referencing of personal data that make user behaviour predictable.
Knowing what you say online? Who cares. But knowing which words you use, whom you are talking to, on which network, and at what time? Now that’s a moneymaker.
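To make the idea concrete, here is a small, purely hypothetical Python sketch. Every record, name and threshold in it is invented; the point is only that metadata alone, without reading a single word of content, already yields a behavioural profile:

```python
# Purely hypothetical illustration: all records, names and thresholds are invented.
# No message content is read; only metadata (who, when, on which network) is used.
from collections import Counter
from datetime import datetime

# Toy metadata records: (timestamp, contact, network)
records = [
    (datetime(2015, 6, 1, 23, 40), "alice", "facebook"),
    (datetime(2015, 6, 2, 0, 15), "alice", "facebook"),
    (datetime(2015, 6, 2, 9, 5), "boss", "email"),
    (datetime(2015, 6, 3, 23, 55), "alice", "facebook"),
]

def build_profile(records):
    """Summarise habits from metadata alone."""
    contacts = Counter(contact for _, contact, _ in records)
    networks = Counter(network for _, _, network in records)
    late_night = sum(1 for ts, _, _ in records if ts.hour >= 22 or ts.hour < 5)
    return {
        "top_contact": contacts.most_common(1)[0][0],
        "preferred_network": networks.most_common(1)[0][0],
        "late_night_share": late_night / len(records),
    }

print(build_profile(records))
# {'top_contact': 'alice', 'preferred_network': 'facebook', 'late_night_share': 0.75}
```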
Categorization For The Win
With something as simple as a postal code, for example, average consumer income can be predicted. The Esri and Claritas agencies even claim to be able to deduce education level, lifestyle, family composition, and consumer habits from this one piece of information. Target made headlines in 2012 when it predicted a teenager’s pregnancy, before her parents were aware, based on the type of lotions, vitamins, and color of items purchased.
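A toy sketch of the same idea, again with invented prefixes and segment profiles (this is not Esri’s or Claritas’s actual methodology): a single postal code is enough to unlock a whole bundle of inferred traits, because aggregate statistics exist for every small geographic area.

```python
# Hypothetical sketch: the prefixes and segment profiles below are invented;
# this is not Esri's or Claritas's actual model.
SEGMENTS = {
    "H2X": {"est_income": 38_000, "education": "university", "lifestyle": "urban renter"},
    "J7V": {"est_income": 72_000, "education": "college", "lifestyle": "suburban family"},
}

def infer_profile(postal_code: str) -> dict:
    """Map a postal-code prefix to an inferred consumer segment."""
    return SEGMENTS.get(postal_code[:3].upper(), {"est_income": None})

print(infer_profile("h2x 1y4"))
# {'est_income': 38000, 'education': 'university', 'lifestyle': 'urban renter'}
```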
For these algorithms to work properly, individuals have to be put into increasingly precise categories. And that is where discrimination lurks, because we don’t always fit easily into a pigeonhole.
Predictions and Discrimination
As Kate Crawford stated when she was interviewed in episode 5, it is minorities, and those who are already discriminated against, that are the most affected by prediction errors. The more an individual corresponds to the “norm”, or to a predetermined category, the easier it is to take their data into consideration. But what happens when we are on the margins? What happens to those that don’t behave the way Amazon, Google, or Facebook predicts?
Facebook recently angered many of its users by strictly enforcing a section of its Terms and Conditions that insists people use their real names on the service. The purpose, says the company, was to provide a safer environment and limit hateful posts. What it didn’t account for was the deletion of accounts belonging to transgender people, Indigenous people, and survivors of domestic violence whose profiles weren’t registered under their legal names. This violated not only these users’ individual rights but also their privacy.
And what about the prejudices and discrimination that algorithms only serve to reinforce? In 2014, Chicago police rang the doorbell of a 22-year-old man named Robert McDaniels. “We’re watching you,” said one of the officers. He had been placed on a list of 400 potential criminals by an algorithm developed by the Illinois Institute of Technology, based on crime data about his neighbourhood, the intersections where crimes had occurred in the past, and his degrees of separation from people involved in crimes. It sounds like science fiction. And if the algorithm gets it wrong, how can the error be corrected?
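The actual Chicago “heat list” model has never been made public; the following is only a hypothetical sketch, with invented weights, of how a score might combine neighbourhood crime data with a person’s degrees of separation from people involved in crimes:

```python
# Hypothetical sketch only: the real "heat list" model was never published,
# and the weights below are invented for illustration.
def risk_score(neighbourhood_crime_rate: float, degrees_of_separation: int) -> float:
    """Higher local crime and closer social ties push the score up."""
    proximity = 1.0 / (1 + degrees_of_separation)  # a 1st-degree contact weighs most
    return round(0.6 * neighbourhood_crime_rate + 0.4 * proximity, 3)

# Someone who has never committed a crime can still rank high,
# purely because of where they live and whom they know.
print(risk_score(neighbourhood_crime_rate=0.8, degrees_of_separation=1))  # 0.68
```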
Take the Test
We’re not going to lie to you: it’s difficult, if not impossible, to find out how we are categorized – and even harder to avoid it altogether. It all depends on the company, the algorithm, and the information that they are after. However, some tools can give us a glimpse into the ways in which the Web categorizes us:
Sandra Rodriguez
Canada and its spying partners exploited weaknesses in one of the world’s most popular mobile browsers and planned to hack into smartphones via links to Google and Samsung app stores, a top secret document obtained by CBC News shows.
The 2012 document shows that the surveillance agencies exploited the weaknesses in certain mobile apps in pursuit of their national security interests, but it appears they didn’t alert the companies or the public to these weaknesses. That potentially put millions of users in danger of their data being accessed by other governments’ agencies, hackers or criminals.
The National Security Agency and its closest allies planned to hijack data links to Google and Samsung app stores to infect smartphones with spyware, a top-secret document reveals.
The surveillance project was launched by a joint electronic eavesdropping unit called the Network Tradecraft Advancement Team, which includes spies from each of the countries in the “Five Eyes” alliance — the United States, Canada, the United Kingdom, New Zealand and Australia.
The top-secret document, obtained from NSA whistleblower Edward Snowden, was published Wednesday by CBC News in collaboration with The Intercept. The document outlines a series of tactics that the NSA and its counterparts in the Five Eyes were working on during workshops held in Australia and Canada between November 2011 and February 2012.
The online giant probably knows more about you than the NSA — including things you might not even tell your mother.
The first law of selling is to know your customer. This simple maxim has made Google into the world’s largest purveyor of advertisements, bringing in more ad revenue this year than all the world’s newspapers combined. What makes Google so valuable to advertisers is that it knows more about their customers — that is to say, about you — than anyone else.
In a landmark ruling, the Court of Appeal of England and Wales has rejected Google’s attempt to prevent British web users from suing the firm over tracking cookies and privacy violations.