
    Facebook suspends tens of thousands of apps in ongoing privacy investigation

    news.movim.eu / ArsTechnica – Yesterday - 02:20


(Image credit: Getty Images)

Facebook—the social media company that has been under intense public criticism for not adequately safeguarding the personal information of its 2 billion users—has suspended tens of thousands of apps for a variety of violations, including improperly sharing private data.

In a post published on Friday, Facebook VP of Product Partnerships Ime Archibong said the move was part of an ongoing review that began in March 2018, following revelations that, two years earlier, Cambridge Analytica used the personal information of as many as 87 million Facebook users to build voter profiles for Donald Trump’s 2016 presidential campaign. Facebook has been embroiled in several other privacy controversies since then.

The tens of thousands of apps were associated with about 400 developers. While some of the apps were only suspended, a few were banned completely. Offenses that led to banning included inappropriately sharing data obtained from the Facebook platform, making data available without protecting users’ identities, and clear violations of the social network’s terms of service.



    AT&T tells court: Customers can’t sue over sale of phone location data

    news.movim.eu / ArsTechnica – 2 days ago - 19:08

(Image: The AT&T logo displayed on a smartphone screen. Credit: Getty Images | SOPA Images)

AT&T is trying to force customers into arbitration in order to avoid a class-action complaint over the telecom's former practice of selling users' real-time location data.

In a motion to compel arbitration filed last week, AT&T said that plaintiffs agreed to arbitrate disputes with AT&T when they entered into wireless service contracts. The plaintiffs, who are represented by Electronic Frontier Foundation (EFF) attorneys, will likely argue that the arbitration clause is invalid.

The case is pending in US District Court for the Northern District of California. In March 2018, a judge in the same court ruled that AT&T could not use its arbitration clause to avoid a class-action lawsuit over the company's throttling of unlimited mobile data plans. That's because the California Supreme Court had ruled in McGill v. Citibank "that an arbitration agreement that waives the right to seek the statutory remedy of public injunctive relief in any forum is contrary to California public policy and therefore unenforceable," the District Court judge wrote at the time.



    Facebook has suspended ‘tens of thousands’ of apps suspected of hoarding data

    news.movim.eu / TechCrunch – 2 days ago - 18:11

Facebook has suspended “tens of thousands” of apps connected to its platform which it suspects may be collecting large amounts of user profile data.

That’s a sharp rise from the 400 apps flagged a year ago by the company’s investigation in the wake of Cambridge Analytica, a scandal that saw tens of millions of Facebook profiles scraped to help swing undecided voters in favor of the Trump campaign during the U.S. presidential election in 2016.

Facebook did not provide a more specific number in its blog post but said the apps were built by 400 developers.

Many of the apps were banned for a number of reasons, such as siphoning off Facebook user profile data, making data public without protecting users’ identities, or other violations of the company’s policies.

Despite the bans, the social media giant said it has “not confirmed” other instances of misuse of user data beyond those it has already notified the public about. Previously disclosed cases include South Korean analytics firm Rankwave, accused of abusing the developer platform and refusing an audit; and myPersonality, a personality quiz that collected data on more than four million users.

The action comes in the wake of the since-defunct Cambridge Analytica and other serious privacy and security breaches. Federal authorities and lawmakers have launched investigations and issued fines over everything from the company’s Libra cryptocurrency project to how it handles users’ privacy.

Facebook said its investigation will continue.


    Tonic launches a personalized news reader that respects user privacy

    news.movim.eu / TechCrunch – 2 days ago - 16:10

Personalization technology can lead to better experiences as it allows apps to customize their content for each individual user. But it can also chip away at user privacy. A company called Canopy wants to change that. It has developed a personalization engine that works without requiring users to log in or even provide an email. Instead, it uses a combination of on-device machine learning and differential privacy to offer a personalized experience to an app’s users. Now it’s demonstrating how this works with the launch of the news reader app, Tonic.

The new app is designed to be completely private, while also learning what you like over time in order to offer a customized experience. But unlike other personalization engines, all the raw interaction and behavioral data stays on your own device. That means the company itself never sees it, nor does any content provider or partner it works with, it says.

As Canopy explains:

What we instead send over an encrypted connection to our server is a differentially private version of your personal interaction and behavior model. The local model of you that goes to Canopy never has a direct connection to the things you’ve interacted with, but instead represents an aggregate set of preferences of people like you. It’s a crucial difference for our approach: even in the worst case of the encryption failing, or our servers being hacked, no one could ever do anything with the private models because they do not represent any individual.
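
Canopy hasn’t published its exact mechanism, but to make the idea above concrete, here is a minimal TypeScript sketch of the general pattern it describes: raw interaction counts stay on the device, and only a noised version of the local preference model is uploaded. The topic names, the epsilon value and the flat per-topic counts are illustrative assumptions, not Canopy’s implementation.

```typescript
// Minimal sketch of local differential privacy for an on-device preference model.
// Topics, epsilon and the per-topic counts are illustrative only.

type PreferenceModel = Record<string, number>;

// Draw a sample from a Laplace(0, scale) distribution via inverse-CDF sampling.
function laplaceNoise(scale: number): number {
  const u = Math.random() - 0.5; // uniform in [-0.5, 0.5)
  return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

// Return a differentially private copy of the model. Raw counts never leave
// the device; only this noised vector would be sent to the server.
function privatize(model: PreferenceModel, epsilon: number): PreferenceModel {
  const sensitivity = 1; // one interaction changes at most one count by 1
  const scale = sensitivity / epsilon; // smaller epsilon => more noise
  const noisy: PreferenceModel = {};
  for (const [topic, count] of Object.entries(model)) {
    noisy[topic] = count + laplaceNoise(scale);
  }
  return noisy;
}

// Example: only `upload` would travel over the encrypted connection.
const localModel: PreferenceModel = { climate: 12, design: 5, privacy: 9 };
const upload = privatize(localModel, 0.5);
console.log(upload);
```

With the noise calibrated to a sensitivity of one interaction, any single article you read shifts the uploaded vector by roughly the same amount as the noise itself, which is what makes the upload hard to tie back to an individual.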

Another big differentiator is that Tonic puts you in control over your own personalization settings. This is not typical. If you’ve ever used an app powered by personalization technology, there’s probably been a point where you were recommended a song, video, or a news article, for example, that seemed to be entirely wrong and not representative of something you’d actually like. But you may have been at a loss as to why it was recommended, because most apps don’t detail this sort of information.

Tonic, on the other hand, lets you view, change and even reset your personalization settings whenever you want.


While Tonic is mainly meant to demonstrate its engine in action — Canopy’s larger goal is to license the technology — the app itself has several other features that make it worth a look.

The company employs a human editorial team to help select the app’s news content, to ensure that it’s not offering a bunch of noise, like clickbait or “hate-reads.” It also avoids breaking news and “hot takes,” it says, as it’s not designed to be an app you use to track the latest news with urgency.

Instead, Tonic pulls from a diversity of sources with its core focus on bringing you a curated, personalized selection of daily reads to inform and inspire. And in the spirit of digital well-being, it’s a finite list of articles — not an endless news feed.

“We made Tonic because we were tired of having to give up our digital selves to get great recommendations, and because we wanted to build an alternative to endless feeds optimized for maximum engagement, breaking news, and outrage,” the company explains in its announcement of the app’s launch.

The technology’s arrival comes at a time when big tech is being investigated for carelessness with user data, and there’s increased attention on user privacy in general. Apple, for example, has made its respect for user privacy a key selling point for its hardware and software.

The New York-based startup was founded by Brian Whitman, founder of The Echo Nest and a former principal scientist at Spotify. The team also includes several ex-Spotify, Instagram, Google and New York Times execs. It’s seed-funded to the tune of $4.5 million by Matrix Partners and other investors from Spotify, WeWork, Splice, MIT Media Lab, Keybase, and more.


    Thinkful confirms data breach days after Chegg’s $80M acquisition

    news.movim.eu / TechCrunch – 3 days ago - 19:08

Thinkful, an online education site for developers, has confirmed a data breach, just days after it confirmed it would be acquired.

“We recently discovered that an unauthorized party may have gained access to certain Thinkful company credentials so, out of an abundance of caution, we are notifying all of our users,” said Erin Rosenblatt, the company’s vice-president of operations, in an email to users.

“As soon as we discovered this unauthorized access, we promptly changed the credentials, took additional steps to enhance the security measures we have in place, and initiated a full investigation,” the executive said.

At the time of writing, there has been no public acknowledgement of the breach beyond the email to users.

Thinkful, based in Brooklyn, New York, provides education and training for developers and programmers. The company claims the vast majority of its graduates get jobs in their field of study within a half-year of finishing their program. Earlier this month, education tech giant Chegg bought Thinkful for $80 million in cash.

But the company would not say when the breach happened — or if Chegg knew of the data breach prior to the acquisition announcement.

A spokesperson for Chegg did not respond to a request for comment. Thinkful spokesperson Catherine Zuppe did not respond to several emails of questions about the breach.

The email to users said the stolen credentials could not have granted the hacker access to certain information, such as government-issued IDs and Social Security numbers, or financial information. But although the company said it’s seen “no evidence” of any unauthorized access to users’ account data, it did not rule out any improper access to user data.

Thinkful said it is requiring all users to change their passwords.

We also asked Thinkful what security measures it has employed since the credentials breach, such as employing two-factor authentication, but did not hear back.

Just months earlier, Chegg confirmed a data breach, which forced the online technology giant to reset the passwords of its 40 million users.

At least Thinkful is now in good company.


    Silicon Valley is terrified of California’s privacy law. Good.

    news.movim.eu / TechCrunch – 3 days ago - 16:00

Silicon Valley is terrified.

In a little over three months, California will see the most sweeping statewide changes to its privacy law in years. California’s Consumer Privacy Act (CCPA) kicks in on January 1 and rolls out broad new privacy protections to the state’s 40 million residents — and to every tech company in Silicon Valley.

California’s law is similar to Europe’s GDPR. It grants the state’s consumers a right to know what information companies have on them, a right to have that information deleted, and a right to opt out of the sale of that information.

For California residents, these are extremely powerful provisions that allow consumers access to their own information from companies that collect an increasingly alarming amount of data on their users. Look no further than Cambridge Analytica, which saw Facebook profile page data weaponized and used against millions to try to sway an election. And given some of the heavy fines levied in recent months under GDPR, tech companies will have to brace for more fines when the enforcement provision kicks in six months later.

No wonder the law has Silicon Valley shaking in its boots. It absolutely should.

It’s no surprise that some of the largest tech companies in the U.S. — most of which are located in California — lobbied to weaken the CCPA’s provisions. These companies don’t want to be on the hook for having to deal with what they see as burdensome requests enshrined in the state’s new law any more than they currently are for Europeans with GDPR.

Despite the extensive lobbying, California’s legislature passed the bill with minor amendments, much to the chagrin of tech companies in the state.

“Don’t let this post-Cambridge Analytica ‘mea culpa’ fool you into believing these companies have consumers’ best interests in mind,” wrote the ACLU’s Neema Singh Guliani last year, shortly after the bill was signed into law. “This seeming willingness to subject themselves to federal regulation is, in fact, an effort to enlist the Trump administration and Congress in companies’ efforts to weaken state-level consumer privacy protections,” she wrote.

Since the law passed, tech giants have pulled out their last card: pushing for an overarching federal bill.

In doing so, the companies would be able to control the messaging through their extensive lobbying efforts, allowing them to push for a weaker statute that would nullify some of the provisions in California’s new privacy law. A single federal standard would also spare companies from spending more resources to ensure compliance with a variety of statutes across multiple states.

Just this month, a group of 51 chief executives — including Amazon’s Jeff Bezos, IBM’s Ginni Rometty and SAP’s Bill McDermott — signed an open letter to senior lawmakers asking for a federal privacy bill, arguing that consumers aren’t clever enough to “understand rules that may change depending upon the state in which they reside.”

Then, the Internet Association, which counts Dropbox, Facebook, Reddit, Snap, Uber (and just today ZipRecruiter) as members, also pushed for a federal privacy law. “The time to act is now,” said the industry group. If the group gets its wish before the end of the year, the California privacy law could be sunk before it kicks in.

And TechNet, a “national, bipartisan network of technology CEOs and senior executives,” also demanded a federal privacy law, claiming — and without providing evidence — that any privacy law should ensure “businesses can comply with the law while continuing to innovate.” Its members include major venture capital firms, including Kleiner Perkins and JC2 Ventures, as well as other big tech giants like Apple, Google, Microsoft, Oracle and Verizon (which owns TechCrunch).

You know there’s something fishy going on when tech giants and telcos team up. But it’s not fooling anyone.

“It’s no accident that the tech industry launched this campaign right after the California legislature rejected their attempts to undermine the California Consumer Privacy Act,” Jacob Snow, a technology and civil liberties attorney at the ACLU of Northern California, told TechCrunch.

“Instead of pushing for federal legislation that wipes away state privacy law, technology companies should ensure that Californians can fully exercise their privacy rights under the CCPA on January 1, 2020, as the law requires,” he said.

There’s little lawmakers in Congress can do in the three months before the CCPA deadline, but that won’t stop tech giants from trying.

Californians might not have the CCPA for long if Silicon Valley tech giants and their lobbyists get their way, but rest easy knowing the consumer won — for once.


    Google completes controversial takeover of DeepMind Health

    news.movim.eu / TechCrunch – 3 days ago - 12:46

Google has completed a controversial takeover of the health division of its UK AI acquisition, DeepMind.

The personnel move had been delayed as National Health Service (NHS) trusts considered whether to shift their existing DeepMind contracts — some for a clinical task management app, others involving predictive health AI research — to Google.

In a blog post yesterday Dr Dominic King, formerly of DeepMind (and the NHS), now UK site lead at Google Health, confirmed the transfer, writing: “It’s clear that a transition like this takes time. Health data is sensitive, and we gave proper time and care to make sure that we had the full consent and cooperation of our partners. This included giving them the time to ask questions and fully understand our plans and to choose whether to continue our partnerships. As has always been the case, our partners are in full control of all patient data and we will only use patient data to help improve care, under their oversight and instructions.”

The Royal Free NHS Trust, Taunton & Somerset NHS Foundation Trust, Imperial College Healthcare NHS Trust, Moorfields Eye Hospital NHS Foundation Trust and University College London Hospitals NHS Foundation Trust all put out statements yesterday confirming they have moved their contractual arrangements to Google.

In the case of the Royal Free, patients’ Streams data is moving to the Google Cloud Platform infrastructure to support expanding use of the app which surfaces alerts for a kidney condition to another of its hospitals (Barnet Hospital).

One NHS trust, Yeovil District Hospital NHS Foundation Trust, has not signed a new contract. It says it never deployed Streams — suggesting it had not found a satisfactory way to integrate the app with its existing ways of working — and has instead taken the decision to terminate the arrangement, though it is leaving the door open to future health service provision from Google.

A spokeswoman for Yeovil hospital sent us this statement:

We began our relationship with DeepMind in 2017 and since then have been determining what part the Streams application could play in clinical decision making here at Yeovil Hospital.

The app was never operationalised, and no patient data was processed.

What’s key for us as a hospital, when it comes to considering the implementation of any new piece of technology, is whether it improves the effectiveness and safety of patient care and how it tessellates with existing ways of working. Working with the DeepMind team, we found that Streams is not necessary for our organisation at the current time.

Whilst our contractual relationship has ended, we will remain an anchor partner of Google Health so will continue to be part of conversations about emerging technology which may be of benefit to our patients and our clinicians in the future.

The hand-off of DeepMind Health to Google, which was announced just over a year ago, means the tech giant is now directly providing software services to a number of NHS trusts that had signed contracts with DeepMind for Streams, as well as taking over several AI research partnerships that involve the use of NHS patients’ data to try to develop predictive diagnostic models using AI technology.

DeepMind — which kicked off its health efforts by signing an agreement with the Royal Free NHS Trust in 2015, going on to publicly announce the health division in spring 2016 — said last year its future focus would be as a “research organisation”.

As recently as this July, DeepMind was also touting a predictive healthcare research “breakthrough” — announcing it had trained a deep learning model for continuously predicting the future likelihood of a patient developing a life-threatening condition called acute kidney injury. (Though the AI is trained on heavily gender-skewed data from the US Department of Veterans Affairs.)

Yet it’s now become clear that it’s handed off several of its key NHS research partnerships to Google Health as part of the Streams transfer.

In its statement about the move yesterday, UCLH writes that “it was proposed” that its DeepMind research partnership — which is related to radiotherapy treatment for patients with head and neck cancer — be transferred to Google Health, saying this will enable it to “make use of Google’s scale and experience to deliver potential breakthroughs to patients more rapidly”.

“We will retain control over the anonymised data and remain responsible for deciding how it is used,” it adds. “The anonymised data is encrypted and only accessible to a limited number of researchers who are working on this project with UCLH’s permission. Access to the data will only be granted for officially approved research purposes and will be automatically audited and logged.”

It’s worth pointing out that the notion of “anonymised” high dimension health data should be treated with a healthy degree of scepticism — given the risk of re-identification.

Moorfields also identifies Google’s “resources” as the incentive for agreeing for its eye-scan related research partnership to be handed off, writing: “This updated partnership will allow us to draw on Google’s resources and expertise to extend the benefits of innovations that AI offers to more of our clinicians and patients.”

Quite where this leaves DeepMind’s ambitions to “lead the way in fundamental research applying AI to important science and medical research questions, in collaboration with academic partners, to accelerate scientific progress for the benefit of everyone”, as it put it last year — when it characterized the hand-off to Google Health as all about ‘scaling Streams’ — remains to be seen.

We’ve reached out to DeepMind for comment on that.

Co-founder Mustafa Suleyman, who’s been taking a leave of absence from the company, tweeted yesterday to congratulate the Google Health team.

DeepMind’s NHS research contracts also transferring to Google Health suggests the tech giant wants zero separation between core AI health research and the means of applying, via its own cloud infrastructure, any promising models it’s able to train off patient data and commercialize by selling them back to the same healthcare providers as apps and services.

You could say Google is seeking to bundle access to the high resolution patient data that’s essential for developing health AIs with the provision of commercial digital healthcare services it hopes to sell hospitals down the line, all funnelled through the same Google cloud infrastructure.

As we reported at the time, the hand-off of DeepMind Health to Google is controversial.

Firstly because the trust that partnered with DeepMind in 2015 to develop Streams was later found by the UK’s data protection watchdog to have breached UK law. The ICO said there was no legal basis for the Royal Free to have shared the medical records of ~1.6M patients with DeepMind during the app’s development.

Despite concerns being raised over the legal basis for sharing patients’ data throughout 2016 and 2017, DeepMind continued inking NHS contracts for Streams — claiming at the time that patient data would never be handed to Google. Yet fast forward a couple of years and that data is now literally sitting on the tech giant’s servers.

It’s that U-turn that led the DeepMind-to-Google Health hand-off to be branded a trust demolition by legal experts when the news was announced last year.

This summer the UK’s patient data watchdog, the National Data Guardian, released correspondence between her office and the ICO which informed the latter’s 2017 finding that Streams had breached data protection law — in which she articulates a clear regulatory position that the “reasonable expectations” of patients must govern non-direct care uses for people’s health data, rather than healthcare providers relying on doctors to decide whether they think the intended purpose for people’s medical information is justified.

The Google Health blog post talks a lot about “patient care” and “patient data” but has nothing to say about patients’ expectations of how their personal information should be used, with King writing that “our partners are in full control of all patient data and we will only use patient data to help improve care, under their oversight and instructions”.

It was exactly such an ethical blindspot around the patient’s perspective that led Royal Free doctors to override considerations about people’s medical privacy in the rush to throw their lot in with Google-DeepMind and scramble for AI-fuelled predictive healthcare.

Patient consent was not sought for passing medical records then; nor have patients’ views been consulted in the transfer of Streams contracts (and people’s data) to Google now.

And while — after it was faced with public outcry over the NHS data it was processing — DeepMind did go on to publish its contracts with NHS trusts (with some redactions), Google Health is not offering any such transparency on the replacement contracts that have been inked now. So it’s not clear whether there have been any other changes to the terms. Patients have to take all that on trust.

We reached out to the Royal Free Trust with questions about the new contract with Google but a spokeswoman just pointed us to the statement on its website — where it writes: “All migration and implementation will be completed to the highest standards of security and will be compliant with relevant data protection legislation and NHS information governance requirements.”

“As with all of our arrangements with third parties, the Royal Free London remains the data controller in relation to all personal data. This means we retain control over that personal data at all times and are responsible for deciding how that data is used for the benefit of patient care,” it adds.

In another reduction in transparency accompanying this hand-off from DeepMind to Google Health, an independent panel of reviewers that DeepMind appointed to oversee its work with the NHS in another bid to boost trust has been disbanded.

“As we announced in November, that review structure — which worked for a UK entity primarily focused on finding and developing healthcare solutions with and for the NHS — is not the right structure for a global effort set to work across continents as well as different health services,” King confirmed yesterday.

In its annual report last year the panel had warned of the risk of DeepMind exerting “excessive monopoly power” as a result of the data access and streaming infrastructure bundled with provision of the Streams app. For DeepMind then read Google now.

Independent experts raising concerns about monopoly power unsurprisingly doesn’t align with Google’s global ambitions in future healthcare provision.

The last word from the independent reviewers is a Medium post penned by former chair, professor Donal O’Donoghue — who writes that he’s “disappointed that the IR experiment did not have the time to run its course and I am sad to say goodbye to a project I’ve found fascinating”.

“This was a fascinating exploration into how a new governance model could be applied to such an important area such as health,” he adds. “It’s hard to know how this would have developed over the years but… what is clear to me is that trust and transparency are of paramount importance in healthcare and I’m keen to see how Google Health, and other providers, deliver this in the future.”

But with trust demolished and transparency reduced, Google Health appears to have learnt exactly nothing from DeepMind’s missteps.


    Private search engine Qwant’s new CEO is Mozilla Europe veteran Tristan Nitot

    news.movim.eu / TechCrunch – 3 days ago - 06:00

French startup Qwant, whose non-tracking search engine has been gaining traction in its home market as a privacy-respecting alternative to Google, has made a change to its senior leadership team as it gears up for the next phase of growth.

Former Mozilla Europe president Tristan Nitot, who joined Qwant last year as VP of advocacy, has been promoted to chief executive, taking over from François Messager — who also joined in 2018 but is now leaving the business. Qwant co-founder Eric Leandri, meanwhile, continues in his role as president.

Nitot, an Internet veteran who worked at Netscape and helped to found Mozilla Europe in 1998, where he later served as president and stayed until 2015 before leaving to write a book on surveillance, brings a wealth of experience in product and comms roles, as well as open source.

Most recently he spent several years working for personal cloud startup Cozy Cloud.

“I’m basically here to help [Leandri] grow the company and structure the company,” Nitot tells TechCrunch, describing Qwant’s founder as an “amazing entrepreneur, audacious and visionary”.

Market headwinds have been improving for the privacy-focused Google rival in recent years as concern about foreign data-mining tech giants has stepped up in Europe.

Last year the French government announced it would be switching its search default from Google to Qwant. Buying homegrown digital tech is now apparently seen as a savvy product choice as well as good politics.

Meanwhile, antitrust attention on dominant search giant Google, both at home and abroad, has led to policy shifts that directly benefit search rivals — such as an update to the default search engine lists baked into its Chromium engine, which was quietly put out earlier this year.

That behind the scenes change saw Qwant added as an option for users in the French market for the first time. (On hearing the news a sardonic Leandri thanked Google — but suggested Qwant users choose Firefox or the Brave browser for a less creepy web browsing experience.)

“A lot of companies and institutions have decided and have realized basically that they’ve been using a search engine which is not European. Which collects data. Massively. And that makes them uncomfortable,” says Nitot. “They haven’t made a conscious decision about that. Because they bring in a computer which has a browser which has a search engine in it set by default — and in the end you just don’t get to choose which search engine your people use, right.

“And so they’re making a conscious decision to switch to Qwant. And we’ve been spending a lot of time and energy on that — and it’s paying off big time.”

As well as the French administration’s circa 3M desktops being switched by default to Qwant (a change it expects to be completed this quarter), the pro-privacy search engine has been getting traction with other government departments and regional governments, as well as large banks and schools, according to Nitot.

He credits a focus on search products for schoolkids, such as Qwant Junior, with generating momentum. Qwant Junior is designed for kids aged 6-12, excludes sex and violence from search results and is ad-free. (It’s set to get an update in the next few weeks.) It has also just been supplemented by Qwant School, a search product aimed at 13- to 17-year-olds.

“All of that creates more users — the kids talk to their parents about Qwant Junior, and the parents install Qwant.com for them. So there’s a lot of momentum creating that growth,” Nitot suggests.

Qwant says it handled more than 18 billion search requests in 2018.

A growing business needs money to fuel it, of course. So a fundraising effort involving convertible bonds is one area Nitot says he’ll be focused on in the new role. “We are raising money,” he confirms.

Increasing efficiency — especially on the engineering front — is another key focus for the new CEO.

“The rest will be a focus on the organization, per se, how we structure the organization. How we evolve the company culture. To enable or to improve delivery of the engineering team, for example,” he says. “It’s not that it’s bad; it’s just that we need to make sure every dollar or every euro we invest gives as much as possible in return.”

Product wise, Nitot’s attention in the near term will be directed towards shipping a new version of Qwant’s search engine that will involve reengineering core tech to improve the quality of results.

“What we want to do [with v2] is to improve the quality of the results,” he says of the core search product. “You won’t be able to notice any difference, in terms of quality, with the other really good search engines that you may use — except that you know that your privacy is respected by Qwant.

“[As we raise more funding] we will be able to have a lot more infrastructure to run better and more powerful algorithms. And so we plan to improve that internationally… Every language will benefit from the new search engine. It’s also a matter of money and infrastructure to make this work on a web scale. Because the web is huge and it’s growing.

“The new version includes NLP (Natural Language Processing) technology… for understanding language, for understanding intentions — for example do you want to buy something or are you looking for a reference… or a place or a thing. That’s the kind of thing we’re putting in place but it’s going to improve a lot for every language involved.”

Western Europe will be the focus for v2 of the search engine, starting with French, German, Italian, Spanish and English — with a plan to “go beyond that later on”.

Nitot also says there will be staggered rollouts (starting with France), with Qwant planning to run old and new versions in parallel to quality-check the new version before finally switching users over.

“Shipping is hard as we used to say at Mozilla,” he remarks, refusing to be fixed to a launch date for v2 (beyond saying it’ll arrive in “less than a year”). “It’s a universal rule; shipping a new product is hard, and that’s what we want to do with version 2… I’ve been writing software since 1980 and so I know how predictions are when it comes to software release dates. So I’m very careful not to make promises.”

Developing more of its own advertising technologies is another focus for Qwant. On this front the aim is to improve margins by leaning less on partners like Microsoft.

“We’ve been working with partners until now, especially on the search engine result pages,” says Nitot. “We put Microsoft advertising on it. And our goal is to ramp up advertising technologies so that we rely on our own technologies — something that we control. And that hopefully will bring a better return.”

Like Google, Qwant monetizes searches by serving ads alongside results. But unlike Google these are contextual ads, meaning they are based on general location plus the substance of the search itself; rather than targeted ads which entail persistent tracking and profiling of Internet users in order to inform the choice of ad (hence feeling like ads are stalking you around the Internet).
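
As a toy illustration of that difference, the TypeScript sketch below picks an ad using nothing but the words in the current query — no user ID, history or profile. The inventory and the keyword-overlap scoring are invented for the example; they are not Qwant’s actual ad stack.

```typescript
// Toy contextual ad matcher: chooses an ad from the query text alone.
// No user identifier, no browsing history, no behavioral profile.

interface Ad {
  headline: string;
  keywords: string[];
}

const inventory: Ad[] = [
  { headline: "Hiking boots on sale", keywords: ["hiking", "boots", "trail"] },
  { headline: "Learn French online", keywords: ["french", "language", "course"] },
];

function pickAd(query: string, ads: Ad[]): Ad | null {
  const terms = query.toLowerCase().split(/\s+/);
  let best: Ad | null = null;
  let bestScore = 0;
  for (const ad of ads) {
    // Score = number of query terms that match the ad's keywords.
    const score = ad.keywords.filter((k) => terms.includes(k)).length;
    if (score > bestScore) {
      best = ad;
      bestScore = score;
    }
  }
  return best;
}

// The only signal is the query itself (general location could be a second
// coarse signal, per the description above, but no profile is ever built).
console.log(pickAd("best hiking boots for winter", inventory)?.headline);
```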

Serving contextual ads is a choice that lets Qwant offer a credible privacy pledge that Mountain View simply can’t match.

Yet up until 2006 Google also served contextual ads, as Nitot points out, before its slide into privacy-hostile microtargeting. “It’s a good old idea,” he argues of contextual ads. “We’re using it. We think it really is a valuable idea.”

Qwant is also working on privacy-sensitive ad tech. One area of current work there is personalization. It’s developing a client-side, browser-based encrypted data store, called Masq, that’s intended to store and retrieve application data through a WebSocket connection. (Here’s the Masq project’s GitHub page.)

“Because we do not know the person that’s using the product it’s hard to make personalization of course. So we plan to do personalization of the product on the client side,” he explains. “Which means the server side will have no more details than we currently do, but on the client side we are producing something which is open source, which stores data locally on your device — whether that’s a laptop or smartphone — in the browser, it is encrypted so that nobody can reuse it unless you decide that you want that to happen.

“And it’s open source so that it’s transparent and can be audited and so that people can trust the technology because it runs on their own device, it stores on their device.”

“Right now it’s at alpha stage,” Nitot adds of Masq, declining to specify when exactly it might be ready for a wider launch.
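
Masq’s real code lives in the GitHub repository linked above; the sketch below is not taken from it. It is just a minimal TypeScript illustration of the pattern Nitot describes — application data encrypted on the device with a key that never leaves it (here via the browser’s Web Crypto API), so anything later persisted or synced is ciphertext only.

```typescript
// Sketch of client-side encrypted storage in the browser (AES-GCM via Web Crypto).
// Illustrative only; not Masq's actual implementation.

async function makeKey(): Promise<CryptoKey> {
  // The key is generated and kept on the device; it is never uploaded.
  return crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, false, [
    "encrypt",
    "decrypt",
  ]);
}

async function encryptRecord(
  key: CryptoKey,
  record: object
): Promise<{ iv: Uint8Array; data: ArrayBuffer }> {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh IV per record
  const plaintext = new TextEncoder().encode(JSON.stringify(record));
  const data = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, plaintext);
  return { iv, data };
}

async function decryptRecord(
  key: CryptoKey,
  iv: Uint8Array,
  data: ArrayBuffer
): Promise<object> {
  const plaintext = await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, data);
  return JSON.parse(new TextDecoder().decode(plaintext));
}

// Usage: only the ciphertext (and IV) would ever touch local storage or a sync channel.
(async () => {
  const key = await makeKey();
  const { iv, data } = await encryptRecord(key, { lastQuery: "qwant junior" });
  console.log(await decryptRecord(key, iv, data));
})();
```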

The new CEO’s ultimate goal for Qwant is to become the search engine for Europe — a hugely ambitious target that remains far out of reach for now, with Google still commanding in excess of 90% regional marketshare. (A dominance that has got its business embroiled in antitrust hot water in Europe.)

Yet the Internet of today is not the same as the Internet of yesterday, when Netscape was a browsing staple — until Internet Explorer knocked it off its perch after Microsoft bundled its own upstart browser as the default on Windows. And the rest, as they say, is Internet history.

Much has changed and much is changing. But abuses of market power are an old story. And as regulators act against today’s self-interested defaults there are savvy alternatives like Qwant primed and waiting to offer consumers a different kind of value.

“Qwant is created in Europe for the European citizens with European values,” says Nitot. “Privacy being one of these values that are central to our mission. It is not random that the CNIL — the French data protection authority — was created in France in 1978. It was the first time that something like that was created. And then GDPR [General Data Protection Regulation] was created in Europe. It doesn’t happen by accident. It’s a matter of values and the way people see their life and things around them, politics and all that. We have a very deep concern about privacy in France. It’s written in the European declaration of human rights.

“We build a product that reflects those values — so it’s appealing to European users.”