US charges Chinese hackers with ‘unprecedented’ attacks on gaming companies – CNET


Hackers targeted multiple video game companies to generate and sell virtual currency online, the Justice Department said.

James Martin/CNET

Video games are a billion-dollar business, and hackers are starting to take notice, the Justice Department warned Wednesday. The agency announced charges against five Chinese hackers and two Malaysian tech executives whom it tied to a six-year campaign against multiple video game companies.

The five from China -- Zhang Haoran, Tan Dailin, Qian Chuan, Fu Qiang and Jiang Lizhi -- are allegedly responsible for hacking more than 100 entities, including social networks, telecommunications providers, universities and nonprofit organizations. While these are common targets for nation-state hackers, the attacks on video game companies raise a new concern for the Justice Department. 

"We see this as unfortunately a new area in which hackers are exploiting, and it's a billion-dollar industry," Michael Sherwin, acting US attorney for Washington, DC, said at a press briefing. "There's a lot of coins, tokens, digital currency involved in a lot of these online games." 

Video games drive robust sales, reaching $1.2 billion in July. Fortnite, which is a free game, took in $2.4 billion in revenue from in-game purchases in 2018. For hackers, it's an industry ripe for profits through cyberattacks. 

"This is a new target-rich environment," Sherwin said, calling the scope and sophistication of these attacks "unprecedented." 

The hacking campaign began in June 2014 and ran until this August, Justice Department officials said. It affected video game companies based in the US, South Korea, Japan and Singapore. 

The group of Chinese hackers, known to the government as APT 41, allegedly gained access through multiple methods, including brute force attacks, spear-phishing and supply chain attacks. In a brute force attack, hackers try possible passwords one after another until something works.
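The mechanics can be illustrated with a minimal, hypothetical Python sketch of a dictionary-style brute force attack against a stolen password hash. The names and wordlist here are invented, and real systems use salted, deliberately slow hashes rather than bare SHA-256:

```python
import hashlib

def sha256_hex(password):
    # Hash a password with SHA-256 (illustrative only; production systems
    # use salted, slow hashes such as bcrypt or Argon2).
    return hashlib.sha256(password.encode()).hexdigest()

def brute_force(target_hash, wordlist):
    # Try each candidate password until one hashes to the target value.
    for candidate in wordlist:
        if sha256_hex(candidate) == target_hash:
            return candidate
    return None

# Hypothetical demo: recover a weak password from a small wordlist.
stolen = sha256_hex("dragon123")
print(brute_force(stolen, ["letmein", "qwerty", "dragon123"]))  # dragon123
```

The defense is equally simple in principle: strong, unique passwords make the wordlist astronomically large, and rate limiting makes each guess expensive.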

"APT41 has been involved in several high-profile supply chain incidents which often blended their criminal interest in video games with the espionage operations they were carrying out on behalf of the state," said John Hultquist, senior director of analysis at cybersecurity company FireEye. "For instance, they compromised video game distributors to proliferate malware which could then be used for follow-up operations."

One video game company, based in California, was breached after the hackers sent an email pretending to be a former employee, with a malware-laced resume attached, according to the court documents.


The FBI's wanted poster for the five Chinese hackers.

FBI

Justice Department officials also noted that the supply chain attacks didn't just affect the video game companies, but reached multiple corporations around the world. The Chinese hackers would compromise software used by major companies and gain access through malicious backdoors they created, officials said. 

Once the hackers had access to a video game company, according to the Justice Department, they would modify its databases to generate certain items or virtual currency for themselves and then sell it through a marketplace called SEA Gamer Mall, a company based in Malaysia. 

Its CEO, Wong Ong Hua, and its chief product officer, Ling Yang Ching, are accused of working with the Chinese hackers to sell the virtual items on their platform. Malaysian police arrested the two on Monday, and the US government is seeking extradition. 

SEA Gamer Mall issued a statement Thursday and said that the company has "never engaged in any illegal activity."

Prosecutors said that Ling joined a Facebook group labeled as a black market for one of the hacked games, which he used to promote the sale of virtual items. 

It's unclear how profitable the effort was, but investigators found 3,779,440 in an unknown currency transferred to one hacker's bank account in 2014. 

In July 2017, the hackers started targeting games based in the US and Europe after finding low revenue on games based in Southeast Asia, according to court documents. 

While having access to the video game companies' internal network, the attackers were also able to stay a step ahead of their fraud detection. The hackers monitored their protection and frequently worked around it to continue their campaign, Justice Department officials said. 

The hackers had access to 25 million records of customers' names, addresses, password hashes, emails and other personal information.

According to court documents, the hackers also used their access to sabotage their competition in video game sales. 

Deputy Attorney General Jeffrey Rosen said the Justice Department worked with Google, Microsoft, Facebook, Verizon and other tech companies to stop the hacking campaign. That included shutting down fake pages designed to look like Google and Microsoft logins and taking down VPNs the hackers used to hide their tracks. 

"We have used every tool at the department's disposal to disrupt these APT 41 activities," Rosen said. 


Hackers out of Russia, China, Iran are targeting US election, Microsoft finds – CNET


Hackers from Russia, China and Iran are targeting both parties in the 2020 US presidential election, researchers found. 

James Martin/CNET
This story is part of Elections 2020, CNET's coverage of the run-up to voting in November.

Hackers have never stopped trying to interfere in US politics; they've only gotten smarter about covering their tracks, researchers from Microsoft disclosed on Thursday. The attacks have only advanced since Russian hackers interfered with the US presidential election in 2016, with attempted hacks now targeting both the Trump and Biden campaigns.

The presidential election in 2016 showed that cybersecurity plays a major role in politics, after Russian hackers stole and leaked thousands of emails from the Democratic National Committee and Hillary Clinton's campaign. Since then, government agencies like the Cybersecurity and Infrastructure Security Agency and the FBI have ramped up efforts to protect elections from hackers and online disinformation. 

In a press briefing in August, the agencies said they hadn't found any evidence of successful cyberattacks against election infrastructure, but they noted that there were many attempts on a daily basis. Microsoft's report on Thursday gives a glimpse into those attempts, which it says are coming from hacker groups in Russia, China and Iran.

"Protecting our elections is a team effort with the federal government and the private sector joining together to thwart foreign malign actors," the Department of Homeland Security's acting secretary, Chad Wolf, said in a statement Thursday. Wolf said Microsoft's announcement reaffirms his statements in the recent State of the Homeland Address that hackers from China, Iran and Russia "are trying to undermine our democracy and influence our elections."

Russian hackers have changed their tactics, and are targeting more than 200 organizations in the US, including consultants tied to Republicans and Democrats, Microsoft said.


Though Russian hackers relied on spear phishing in 2016, sending tailored messages to trick victims into clicking malicious links, in recent months they've turned to brute force attacks, flooding accounts with password guesses until one of them works.

Russian hackers have been covering up their tracks by rotating through 1,000 different IP addresses, and adding about 20 new ones each day, Microsoft found. 
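Rotating through fresh IP addresses is aimed at defeating per-IP rate limits, so defenders often count login failures per account instead of per source address. A hedged, hypothetical sketch of that detection approach (the accounts, addresses and threshold are invented):

```python
from collections import Counter

def flag_targeted_accounts(failed_logins, threshold=5):
    # Count failed logins per account, ignoring the source IP, since
    # attackers rotating through many addresses evade per-IP rate limits.
    counts = Counter(account for account, _ip in failed_logins)
    return sorted(account for account, n in counts.items() if n >= threshold)

# Hypothetical log of (account, source_ip) pairs; note each IP appears once,
# so a per-IP limit would never trigger.
log = [("alice@example.com", f"203.0.113.{i}") for i in range(8)]
log += [("bob@example.com", "198.51.100.7")]
print(flag_targeted_accounts(log))  # ['alice@example.com']
```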

Chinese hackers launched thousands of attacks and successfully compromised about 150 people between March and September, Microsoft said. The nation-state's hackers are targeting people affiliated with presidential campaigns, and made an unsuccessful attempt against people related to the Joe Biden for President campaign, the company said. 

"We are aware of reports from Microsoft that a foreign actor has made unsuccessful attempts to access the non-campaign email accounts of individuals affiliated with the campaign," the Biden campaign said. "We have known from the beginning of our campaign that we would be subject to such attacks and we are prepared for them. Biden for President takes cybersecurity seriously, we will remain vigilant against these threats, and will ensure that the campaign's assets are secured."

Unlike the Russian hackers, the hackers in China are exploiting known bugs on websites and targeting specific individuals in their attacks, Microsoft said.

Iranian hackers have been trying to access accounts belonging to Trump's campaign staff, as well as accounts belonging to Trump administration officials, between May and June, according to the company. 

"As President Trump's re-election campaign, we are a large target, so it is not surprising to see malicious activity directed at the campaign or our staff," said the Trump campaign's deputy national press secretary, Thea McDonald. "We work closely with our partners, Microsoft and others, to mitigate these threats. We take cybersecurity very seriously and do not publicly comment on our efforts." 

Microsoft also caught Iranian hackers making more than 2,700 attempts to hack a presidential campaign last October, and Google found Iranian and Chinese hackers attempting to hack both presidential campaigns in June.

A report from the Office of the Director of National Intelligence in August found that Russia was attempting to sabotage Biden's election bid while China was working against the Trump campaign. 

Microsoft's disclosure comes the same day the US Treasury Department announced sanctions against three Russians for ties to the country's disinformation effort and a Ukrainian Parliament member for efforts to interfere with the 2020 election. 


A crime reporting app shifts to tracking COVID-19, raising privacy questions – CNET


Privacy advocates worry that Citizen's exposure notification shows the exact address where a person was in contact with someone who tested positive for COVID-19.

Citizen
For the most up-to-date news and information about the coronavirus pandemic, visit the WHO website.

Citizen, an app that lets you see unverified crime reports in your neighborhood, has often been used to advance false claims. One doozy: a tiger reportedly loose in Manhattan that turned out to be a raccoon. Now the company wants to help cities track cases of COVID-19.

Los Angeles County on Wednesday said it's partnering with Citizen for its contact tracing app SafePass. The app, unveiled in August, works as a digital pass for logging your symptoms and location. It uses Bluetooth and GPS to track your interactions with other people using the app. 

If someone you've been in contact with later tests positive for COVID-19 and marks themselves on the app, the app notifies you about the exposure and provides details on when and where it happened.

The officials, including Mayor Eric Garcetti and public health director Dr. Barbara Ferrer, encouraged the area's 10 million residents to download the app. Advocates, however, have warned that SafePass' location-tracking features are a privacy risk.

The mayor's office didn't respond to a request for comment on privacy concerns with the app. 

"We have to deploy every tool at our disposal to halt the spread of COVID-19 –– from wearing masks to keeping our distance to avoiding large gatherings –– and contact tracing is an absolutely essential part of our effort to track this virus and save lives," Garcetti said in a statement. 

Public safety experts and lawmakers have criticized Citizen for stirring panic in communities, accusing the app of inundating people with crime alerts while overall crime rates are at historic lows. The company's shift to public health raises alarms that it could bring that practice into a global pandemic. 


"For an app that's pretty much designed to try to make you get a constant stream of dangerous situations to avoid, it's not hard to imagine they would attach a public health lens on top of the unverified crime reporting that they do," said Angel Diaz, liberty and national security counsel at the Brennan Center for Justice. 

Diaz said he first started examining Citizen's contact tracing app when the company was in talks to partner with New York City. He said he saw several red flags with the service, specifically with how it shows exposures and the amount of location data it takes.

The coronavirus pandemic has eroded many privacy protections, with companies using surveillance software to monitor social distancing while data brokers use location data to monitor people's movements.

Contact tracing apps come with their own privacy concerns because they essentially require people to share their whereabouts at all times with an app. The apps work by notifying people if they've been around someone who tested positive for COVID-19, based on their location history. 

Contact tracing is considered an effective tool for limiting the spread of the contagious disease, but there are still privacy concerns about how the data collected in apps is protected. 

In July, lawmakers pushed for a privacy bill that would limit contact tracing data to health purposes only, blocking the data from being used by law enforcement agencies or for-profit companies.

Citizen CEO Andrew Frame said the company was committed to maintaining privacy in its contact tracing app, though there are a handful of concerns about what data SafePass collects, and how it can be used. 

"We created SafePass to help slow the spread [of the] virus and give people the tools they need to keep themselves and their communities safe through collective action, including sharing information," Frame said in a statement. 

Location tracking 

For Citizen's contact tracing to work, the app takes your device's location data through both its GPS and Bluetooth. You can turn off your GPS, but the company said the app won't function without location data. 

Citizen said its alerts don't share any personal information, but the notification shows the time, place and duration of an exposure to COVID-19. The company acknowledges in its privacy policy that this is enough information to figure out someone's identity.

"While this information does not identify you, there are circumstances when a user could identify you based on the location," the company said. "For example, this may occur if a user knows you personally and recalls that they met you at the location we specify on the map."

Other apps for tracking COVID-19 exposures don't require location data as a privacy protection. Google and Apple's exposure notification tools don't request GPS information, and only use encrypted Bluetooth signals to mark distance, for example. 

The structure lets you know you were exposed to someone who tested positive for COVID-19 and that you should be tested and quarantine. But it doesn't tell you where the exposure took place. 
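A rough sketch of that structure (a simplification, not Google or Apple's actual protocol): phones broadcast rolling random tokens over Bluetooth, and matching against tokens published by people who test positive happens on the device itself, with no location ever recorded:

```python
import secrets

def new_token():
    # A rolling random identifier broadcast over Bluetooth; it carries no
    # GPS data and changes frequently so it can't be tracked long-term.
    return secrets.token_hex(16)

def check_exposure(tokens_heard, tokens_of_positives):
    # Matching happens on-device: you learn *that* you were near a positive
    # case, but because tokens contain no location, not *where*.
    return bool(tokens_heard & tokens_of_positives)

# Hypothetical day: my phone recorded tokens from nearby phones, one of
# which later belonged to a user who reported a positive test.
infected = new_token()
heard = {new_token(), infected, new_token()}
print(check_exposure(heard, {infected}))  # True
```

This is the design trade-off the article describes: the token approach sacrifices the "where" that Citizen's GPS-based alerts provide, in exchange for making users much harder to identify.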

Diaz says that's important for privacy and safety because knowing where a potential exposure happened is often enough to identify the person involved. 

"If you got exposed near a restaurant or a school, it really opens up harassment that could have been avoided," Diaz said. 

Citizen said it's valuable for people to know where the exposures happened, noting that it could help lead to better decisions for people potentially infected with COVID-19. 

"If a potentially infected user, say at a birthday party, sees that they may have been exposed at the party and remembers that they weren't wearing a mask or maintaining a social distance, he/she may decide to take different precautions than if they were," a Citizen spokesman said. "Knowing where you may have been exposed can also give you the information you need to alert your friends and family who were at the birthday party who aren't contact tracing."

The company said it deletes Bluetooth and GPS location data, as well as the photos of your ID used for verification, after 30 days.

The company's privacy policy also said that your location data could be shared with government agencies, without clarifying which agencies those could be. Diaz raised concerns that it could mean anything from public health officials to law enforcement agencies. 

Citizen said in a statement it can conduct COVID-19 symptom surveys on behalf of a state or city government agency and share results, but doesn't provide data to law enforcement agencies unless presented with a warrant or a subpoena. 

The company didn't answer why its privacy policy doesn't specify that. 

The privacy policy also said Citizen can share the data to protect against "fraudulent, harmful, unauthorized, unethical or illegal activity," which Diaz warns leaves the window open for sharing data with law enforcement agencies. 

"That gives them a lot of leeway in terms of what they decide is necessary to give over to the government," he said. 


Portland, Oregon, passes toughest ban on facial recognition in US – CNET


Under Portland's ordinance, private businesses will be banned from using facial recognition. 

Getty Images

The city council in Portland, Oregon, on Wednesday passed the strongest ban on facial recognition in the US, blocking use of the technology by private businesses as well as government agencies in the city. 

Portland's ban on facial recognition isn't the first, but it's the strictest. Cities like San Francisco, Boston and Oakland, California, have all passed legislation banning just government agencies from using facial recognition.  

The bill passed unanimously, and the ban will take effect in January 2021. 

It means that along with police officers being banned from using facial recognition to identify potential suspects, stores and businesses won't be able to use the technology either. An Oregonian report in February detailed how a Portland convenience store used facial recognition to allow entry and identify shoplifters. 

The ban will also extend to facial recognition at airports, where airlines like Delta use the technology for boarding.

"All Portlanders are entitled to a city government that will not use technology with demonstrated racial and gender biases that endanger personal privacy," Portland Mayor Ted Wheeler said at Wednesday's City Council meeting.

The commercial ban on facial recognition signals a potentially larger move to outlaw the technology beyond police use. While companies like Amazon and Microsoft have paused their facial recognition work with police because of ethical concerns, the technology is still being used by businesses, which can provide that data to law enforcement agencies. 

In July, the Electronic Frontier Foundation found that San Francisco police used a downtown business district's camera network to monitor protesters, blurring the line between public and private surveillance. 

Researchers have frequently found racial and gender bias issues with facial recognition algorithms, regardless of who's using the technology. Detroit's police department has admitted that its facial recognition misidentifies people "96% of the time," with the technology leading to wrongful arrests on multiple occasions. 

Amazon spent $24,000 lobbying against Portland's legislation, but declined to comment on its opposition to the facial recognition regulations. The company referred to its past remarks calling for facial recognition legislation at the federal level, where it's spent more than $14 million in lobbying.  

Private businesses often have lower thresholds for accuracy than government agencies do. Amazon recommends that law enforcement agencies use a 99% confidence threshold for its facial recognition algorithm, but not private businesses. 
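To illustrate why the threshold matters, here is a toy example of filtering face matches by confidence score. The names and scores are invented, and this is not Amazon's actual API; it only shows how lowering the bar lets in more (potentially false) matches:

```python
def confident_matches(candidates, threshold=0.99):
    # Keep only matches at or above the confidence threshold. A 0.99 bar
    # (the level Amazon recommends for law enforcement) rejects borderline
    # matches that a lower commercial threshold would accept.
    return [name for name, score in candidates if score >= threshold]

# Hypothetical match scores from a face recognition query.
candidates = [("person_a", 0.995), ("person_b", 0.91), ("person_c", 0.85)]
print(confident_matches(candidates))                  # ['person_a']
print(confident_matches(candidates, threshold=0.80))  # all three names
```

At the lower threshold, two additional people are flagged on weak evidence, which is exactly the false-match risk the article describes for stores screening shoplifters.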

Private businesses also don't have rules or standards to prevent the abuse of facial recognition. You could be banned from visiting a store and never know that it's because the facial recognition system misidentified you as a shoplifter.  

Bans on facial recognition have come city by city, while some federal lawmakers are looking to pass national legislation on the technology. Most of the legislation proposed in the last year has been focused on public use of facial recognition rather than on use by private businesses. 

In March 2019, two senators proposed the Commercial Facial Recognition Privacy Act, which would prevent companies from collecting facial recognition data on people without their consent. 

Portland's City Council decision goes beyond limiting the technology, outlawing it entirely.

"This is the first of its kind legislation in the nation, and I believe in the world," Wheeler said. "This is truly a historic day for the city of Portland."

The mayor added that he hopes this legislation will inspire other cities to impose tougher regulations against facial recognition. Privacy advocates who supported the ban in Portland agreed.

"Now, cities across the country must look to Portland and pass bans of their own," Lia Holland, an activist with the digital rights group Fight for the Future, said in a statement. "And, Congress should act to pass bans at the federal level. We have the momentum, and we have the will to beat back this dangerous and discriminatory technology."


The ordinance bans both government agencies and private businesses from using facial recognition in Portland, with exceptions for individual use like unlocking your own phone or using a face filter on a social media app. 

Companies that violate the ban are subject to lawsuits and may be required to pay $1,000 for each day of the violation, according to the legislation.

"We are a pro-technology city, but what we've seen so far in practice with this technology, it continues to exacerbate the overcriminalization of Black and Brown people in our community," Commissioner Jo Ann Hardesty said. Hardesty said facial recognition won't be used in the city until problems are addressed and the fixes have been verified by independent sources.


Online voting company pushes to make it harder for researchers to find security flaws – CNET


In a Supreme Court briefing, Voatz argues that security researchers should need authorization to search for vulnerabilities.

West Virginia Secretary of State; screenshot by Stephen Shankland/CNET

Cybersecurity experts and lawmakers have little faith in online voting, thanks to the high potential for hacks, as well as worries about vulnerabilities, either of which could affect an election's outcome. Security researchers often find flaws with online-voting systems, and now an e-voting company is pushing to make it more difficult to find vulnerabilities.

In a briefing filed to the Supreme Court on Thursday, Voatz, a Boston-based e-voting company, argues that security researchers shouldn't have legal protections when looking for flaws without permission.  

"Allowing for unauthorized research taking the form of hacks/attacks on live systems would lead to uncertain and often faulty results and conclusions, makes distinguishing between true researchers and malicious hackers difficult, and unnecessarily burdens the mandate of the nation's critical infrastructure," Voatz said in a statement to CNET.

Voatz has argued against security researchers who found issues with its mobile-voting software, which is used in 11 states. In February, Voatz disputed the findings of MIT researchers, who said the e-voting platform was riddled with security flaws.

"By conducting their activities on an unauthorized basis rather than through Voatz authorized bug bounty program or direct collaboration with Voatz, the researchers rendered their own findings relatively useless," the company said in its briefing on Thursday.

Last October, Voatz also reported a University of Michigan election-security student to West Virginia officials, who turned the investigation over to the FBI. The student had been enrolled in a course that required looking at potential flaws on mobile-voting technology, which included Voatz, according to CNN. 


Security researchers always run the risk of violating the Computer Fraud and Abuse Act (CFAA), a law created in 1986 with a broad definition of what's considered hacking. The law considers any intentional access to a computer without authorization to be a federal crime. It's broad enough that sharing a Netflix password could be considered a CFAA violation. 

In April, the Supreme Court agreed to hear Van Buren v. United States, a case that centers on what can be considered a CFAA violation. Voatz's filing was submitted as a friend-of-the-court brief in that case.

Security researchers want the Supreme Court to consider their work protected from the CFAA.

"Almost by its nature, discovering security vulnerabilities requires accessing computers in a manner unanticipated by computer owners, frequently in contravention of the owners' stated policies," a group of security researchers wrote in a July 8 brief.

Security researchers find and report vulnerabilities on critical infrastructure, including voting machines. The work is so vital that officials from the Department of Homeland Security invited hackers to continue finding flaws on election infrastructure. 

For years, voting machine vendors had been apprehensive about the process, raising concerns about hackers finding issues with their software without proper permission. In August, major election vendor ES&S started allowing penetration testing on its machines.

In its brief, Voatz made clear it didn't agree with that direction.

The company argues that the Supreme Court will create a loophole for malicious hackers to carry out attacks if it allows security researchers to test for vulnerabilities without authorization. 

"This would undoubtedly result in a significant increase in such unauthorized hacking," Voatz said in its briefing. 

Security researchers warn that if they're allowed to find and disclose flaws only with explicit permission from the companies involved, malicious hackers, who are undeterred by laws, will exploit this knowledge gap. 

"To elaborate, if there's a method of exploiting the system that the organization is unaware of, they cannot possibly provide legal access to test it," Bugcrowd founder Casey Ellis said in a statement. "Unauthorized access is one of the main purposes of security research -- by making it illegal, researchers will be unable to effectively do their jobs, the organization will not be able to close all vulnerabilities, and attackers will win."  

Jake Williams, founder of the security firm Rendition Security, pointed out that there's a difference between vulnerability disclosure and discovery. 

Though both security researchers and malicious hackers work without authorization, only security researchers are properly disclosing these flaws to the companies involved. Malicious hackers will discover vulnerabilities and often use them for financial gain, without ever informing the companies, he said.

Voatz's argument on Thursday, he added, would change that dynamic for the worse.

"The vast majority of researchers, I'd say 90% plus, are not authorized," Williams said. "They are 100% trying to make it more difficult, there's no doubt about that." 


Education apps are sending your location data and personal info to advertisers – CNET


Apps for teaching kids can come at the price of your privacy, researchers warn.

Getty Images

With the coronavirus pandemic pushing schools online out of public health concerns, parents and teachers are turning to digital alternatives like apps to bridge the virtual gap. While kids can learn via these apps, hundreds of advertisers are learning about them, too. 

Researchers from the International Digital Accountability Council looked at 496 education apps across 22 countries, finding privacy issues with many of these services. Several apps were providing location data to third-party advertisers and collecting device identifiers that can't be reset unless you buy a new phone.

While the majority of apps examined in the report met privacy standards, the scale of data collection discovered raised alarms about the nature of education apps. 

Researchers found that 79 of the 123 apps they manually tested were sharing user data with third parties. That data could include your name, email address, location data and device ID. The study also found that more than 140 third-party companies were receiving data from ed tech apps, with the largest share going to Facebook, followed by Google.

Security researchers often find privacy issues with apps, many of which harvest data from devices even when you don't give consent. 

Even if you do give permission, the data is often shared with multiple third parties that use it in their own ways. You may allow your weather app to get your location for accurate forecasts, but that app's data partners can use it for advertising or law enforcement purposes.

App creators also often use software development kits, or SDKs, as shortcuts rather than writing their software from scratch, which can also lead to data-stealing schemes.

Security researchers will analyze network traffic and examine code on apps to figure out where the data is going, but the average person shouldn't be expected to learn this skill to protect their privacy. 
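As a simplified illustration of that kind of analysis, here is a hypothetical example that scans a HAR-format traffic capture for hosts outside the app maker's own domain. The app name and URLs are invented, and real analyses involve far more than hostname matching:

```python
import json

def third_party_hosts(har_json, first_party):
    # List outbound hosts in a captured HAR file that don't belong to the
    # app maker's domain -- a rough way to spot data going to advertisers.
    har = json.loads(har_json)
    hosts = set()
    for entry in har["log"]["entries"]:
        host = entry["request"]["url"].split("/")[2]  # scheme://HOST/path
        if not host.endswith(first_party):
            hosts.add(host)
    return hosts

# Hypothetical capture from a fictional education app.
capture = json.dumps({"log": {"entries": [
    {"request": {"url": "https://api.eduapp.example/lessons"}},
    {"request": {"url": "https://graph.facebook.com/events"}},
]}})
print(third_party_hosts(capture, "eduapp.example"))  # {'graph.facebook.com'}
```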

These privacy concerns are common across apps, but it's a bigger issue among education apps since the majority of people using them are children. Education apps with millions of downloads are sharing location data on kids without their knowledge, the report found. 

"When you have a population of users that is so heavily focused on younger people, that raises sensitivity that developers should be aware of and the platforms should be vigilant about," IDAC president Quentin Palfrey said.


Learning about you 

The researchers manually tested 78 Android apps and 45 iOS apps, some of which overlapped, for a combined 98 unique apps. They also automatically tested 421 Android apps in their research. 

The manual tests are more thorough, and look at how personal data is collected, who it's sent to, and what kind of information is being taken. 

The study found 27 apps that were taking location data. Some had a purpose for needing that information -- like constellation apps that used your location to tell you what stars are above you in real time. Other apps had more questionable reasons for gathering your location data, like apps for learning programming languages like JavaScript and SQL.

One app, Shaw Academy, was collecting location data and personal identifiers and sending them to third-party marketing firm WebEngage. In June, Shaw Academy boasted that its online educational platform saw a nearly eightfold increase since COVID-19 lockdowns began in March, with the majority of its new users aged between 25 and 34.

Shaw Academy's chief strategy officer, John White, referred to the company's privacy policy, which stated that the company can collect, use and share real-time location data through GPS, Bluetooth and IP address, as well as cell tower locations, to "provide location-based services," but did not explain what the services are. 

"This location data is collected anonymously, unless the user provides consent. The user may withdraw consent to Shaw Academy and our partners' collection, use, transmission, processing and maintenance of location and account data at any time by not using the location-based features and turning off the Location Services settings (as applicable) on the user's device and computer," White said in an email.

WebEngage, which specializes in targeted advertising on Facebook, email and push notifications, boasts that it tracks 400 million people per month. The company didn't respond to a request for comment. 

Even when apps aren't collecting your location data directly, data related to your Wi-Fi, like your router details, serves as a de facto location marker. 

Router data is often tied to locations -- unless you're actively moving your router around -- which means that advertisers are aware when you're on a home Wi-Fi network or one in a coffee shop. 

Many of the apps also collected device identifiers along with advertising IDs, which goes against Google's developer policy. Your phone has multiple identifiers, but developers generally aren't allowed to collect the persistent ones. 

You can reset your Android and Apple advertising IDs, but you can't reset your device ID unless you get a new phone. Google's policies don't allow developers to collect both the advertising ID and the device ID, because data brokers can simply link each new advertising ID to the permanent device ID, essentially making the reset useless. 
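The linking problem described above can be sketched in a few lines. This is a hypothetical illustration (invented IDs and field names, not any real broker's code) of why resetting an advertising ID accomplishes nothing once it has been tied to a permanent device ID: records before and after the reset still join on the device ID, so the profile carries over.

```python
# Hypothetical broker-style linking table: device_id -> accumulated profile.
profiles = {}


def ingest(device_id, ad_id, event):
    """Record an event, keyed by the permanent device ID."""
    profile = profiles.setdefault(device_id, {"ad_ids": set(), "events": []})
    profile["ad_ids"].add(ad_id)   # every advertising ID ever seen is linked here
    profile["events"].append(event)


# The user browses under one advertising ID...
ingest("device-123", "ad-id-A", "opened language app")
# ...then resets the advertising ID. The device ID is unchanged.
ingest("device-123", "ad-id-B", "opened math app")

# The reset accomplished nothing: both ad IDs map to one continuous profile.
print(profiles["device-123"]["ad_ids"])   # both ad-id-A and ad-id-B
print(len(profiles))                      # still a single tracked user
```

This is exactly why the policy forbids collecting the two IDs together: either ID alone is manageable, but the pair defeats the user's only reset mechanism.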

The manual tests found nine apps that were collecting and sharing this data with third-party advertisers, each of which was installed on at least 10 million devices. The researchers found that Duolingo, a popular language learning app, was sharing Android IDs and advertising IDs with Facebook. 

Duolingo didn't respond to a request for comment. 

On average, the education apps examined shared data with at least three third-party companies. Facebook had the widest pool, getting user data from 128 apps, followed by advertising company Unity, which got data from 72 education apps.


The developer of an unnamed app, which had more than 1 billion installations, didn't know the app was sharing data with the mobile analytics firm Amplitude until the researchers brought it up to the company, the report stated. 

"Our investigation did not reveal any misconduct by these third parties, but the scale and opacity of the data-collection is noteworthy and presents some risks to the health of the ed tech ecosystem," the report said. 

The study also found that 46% of the apps it tested used a "potentially concerning" SDK. These SDKs collect data in the background, and people would never know unless they had the same tools and capabilities as security researchers.

"Our concern is how little users know and can control about what happens once data is collected through a relationship between the app and the SDK," Palfrey said. "If you don't know about it, you can't control it, and you can't say no to it." 

Because these apps are circumventing permissions requests and the trackers are often hidden from public view, it's hard to give advice to parents and teachers who have privacy concerns. The fix relies on regulators, and on platforms like Google and Apple kicking misbehaving apps off their stores, the watchdog group said. 

"A lot of what we saw are the kinds of things that can be best remedied by good developer practices, good platform oversight or greater regulatory scrutiny," Palfrey said. "As opposed to the kinds of things that parents or teachers on their own are able to remedy."


Facebook sues company allegedly behind data-stealing scheme – CNET


Facebook is suing a data monetization company for allegedly taking Facebook users' info without consent.

Andrew Hoyle/CNET

Facebook filed a lawsuit Thursday against MobiBurn, alleging that apps using code written by the data monetization company harvested information about the social network's users without permission.

Last November, Facebook and Twitter launched investigations into two third-party software development kits (SDKs) that security researchers found were collecting data without consent.

Making an app from scratch takes a lot of time, and SDKs are building blocks developers can use instead. These chunks of code often come at a price to app users, though. SDKs can be free to developers in exchange for user data, which essentially means you can be tracked by companies you've never heard of. When you download an app that finds cheap gas, for instance, your location data may be actively sold to data brokers.
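The trade described above can be made concrete with a sketch. Everything here is invented for illustration (the class, key and fields are hypothetical, not any real vendor's API): the developer sees a one-line integration and a simple logging call, while every event that leaves the device also carries a device fingerprint bound for the SDK vendor.

```python
# Stand-in for the vendor's ingestion endpoint.
sent_to_vendor = []


class FreeAnalyticsSDK:
    """Hypothetical 'free' analytics SDK: paid for with user data."""

    def __init__(self, api_key, device_info):
        self.api_key = api_key
        self.device_info = device_info  # collected once at startup

    def log_event(self, name):
        # The developer writes a simple logging call; the payload that
        # actually leaves the device also includes the device details.
        sent_to_vendor.append({
            "api_key": self.api_key,
            "event": name,
            "device": self.device_info,
        })


# One line of integration in the app...
sdk = FreeAnalyticsSDK("dev-key", {"model": "Pixel 4", "carrier": "ExampleTel"})
sdk.log_event("app_opened")

# ...and the vendor receives the device fingerprint alongside the event.
print(sent_to_vendor[0]["device"]["model"])
```

Nothing in the app's visible behavior hints at the second half of the payload, which is why the practice only surfaces when researchers inspect the network traffic.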

The practice is widespread across the data industry, and companies say it's transparent because it's disclosed in their privacy policies. But studies have found that the majority of people don't read privacy policies, casting doubt on these assertions of transparency.

In its lawsuit, Facebook argues that MobiBurn wasn't transparent about its actions, accusing the company of siphoning data from people's devices without consent. The SDK would grab a digital key for the "Log In with Facebook" feature, and use it to make requests for data from Facebook every 24 hours. 

If your device had an app that was built with MobiBurn's SDK, and that app was also linked to your Facebook account, the app would siphon data such as your name, time zone, email address and gender from your profile, the social network said. 

Facebook sent a cease-and-desist order to the UK-based company last November.

The lawsuit said MobiBurn had its SDK in about 400 gaming, security and utility apps. In addition to grabbing data from Facebook accounts, the SDK would also take a device's call logs, location data, contacts, browser type, email and other apps installed on the phone, according to court documents.

MobiBurn didn't immediately respond to a request for comment. 

In a November statement, MobiBurn denied the accusations, saying "no data from Facebook is collected, shared or monetized by MobiBurn." 

Facebook accused MobiBurn of paying developers to install its SDK in their apps, where the code remained hidden. The code harvested data until the social network disabled app access last November. MobiBurn has also since disabled its SDK.

The social network said that MobiBurn isn't cooperating with Facebook's request for an audit. The lawsuit marks the first time Facebook has sued a UK app developer. The social network says it wants an injunction to reinforce its ban against MobiBurn using Facebook's platform. It's still seeking an audit. 

"Today's actions are the latest in our efforts to protect people who use our services, hold those who abuse our platform accountable, and advance the state of the law around data misuse and privacy," Jessica Romero, Facebook's director of platform enforcement and litigation, said in a statement.


This isn't the first time Facebook has turned to legal action against alleged data abuse. In February, the social network sued data analytics firm OneAudience for a similar practice, alleging the company paid developers to install its SDK in shopping and gaming apps so it could harvest data. 

Facebook has also sued developers over alleged data scraping abuse, ad fraud and hacking campaigns. 

Along with Thursday's lawsuit against MobiBurn, Facebook also announced litigation against Nakrutka, a service it accused of using bots to generate fake likes, comments, views and followers on Instagram.

The service's website, which is entirely in Russian, openly markets fake engagement from bots. 

Nakrutka didn't immediately respond to a request for comment. 


Election security officials find no evidence of coordinated fraud with mail-in ballots – CNET


Election officials on Wednesday said they haven't seen any evidence of mail-in voter fraud.

James Martin/CNET
This story is part of Elections 2020, CNET's coverage of the run-up to voting in November.

Senior US officials said Wednesday that the government hasn't seen evidence of a coordinated effort to commit mail-in voting fraud, a claim President Donald Trump and some members of his administration have made for months. 

At a briefing, intelligence officials who have been consulting with election workers across all 50 states said they haven't found evidence to support Trump's claims. 

"We have not seen, to date, a coordinated national voter fraud effort during a major election," said a senior Federal Bureau of Investigation official, who spoke on background. "It would be extraordinarily difficult to change a federal election outcome through this type of fraud alone, given the range of processes that would need to be affected or compromised by an adversary at the local level."

That range of processes includes finding every registered voter's address, forging their signatures, and replicating the barcodes and special stock the ballots are printed on.

The comments follow a series of statements by Trump, Attorney General William Barr and other members of the administration that foreign countries could print counterfeit ballots and sway the outcome of the US presidential election. Election experts and officials behind the process point out that mail-in voter fraud is nearly impossible to pull off. 

Concerns over mail-in ballots have risen amid a surge in demand caused by the coronavirus outbreak. Several states have changed policies around absentee voting to protect public health and keep crowds to a minimum. 

Trump has fought these changes, arguing that mail-in voting will be "substantially fraudulent." Many of his claims on Twitter have been flagged by the social network for misleading information.

At the Wednesday briefing, the FBI official said the bureau has 56 field offices with agents and elections crimes coordinators frequently running through election fraud scenarios and working with local counties to safeguard mail-in ballots.

Officials at the Cybersecurity and Infrastructure Security Agency, a Department of Homeland Security branch that oversees election security, also said the agency hasn't seen any efforts from foreign actors to commit mail-in ballot fraud. 

CISA officials said the agency is coordinating with election officials across the country to safeguard against cyberattacks. CISA sensors installed in local county networks have detected probes and scans for known vulnerabilities, but no significant attacks against election infrastructure. 

"We have no information or intelligence that any nation-state threat actor is engaging any kind of activity undermining any part of the mail-in vote or ballots," said a senior CISA official, who also spoke on background. 

The officials urged Americans to be patient for election results and vigilant for disinformation surrounding the election. The rise in mail-in ballots is expected to delay final results, and several states will take as long as a week to count mail-in ballots. 

There's still potential for disinformation campaigns, the officials said, adding that Russian, Chinese and Iranian efforts to affect election outcomes are ongoing. In early August, the Office of the Director of National Intelligence released a report stating that Russian campaigns are backing Trump while Chinese threat actors are pushing for the Democratic nominee Joe Biden.

"We encourage Americans to consume information with a critical eye," a senior official from the office of the Director of National Intelligence said on background. "Check out your sources before reposting messages."



Google court docs raise concerns on geofence warrants, location tracking – CNET


Google's staffers raised concerns about geofence warrants and confusion over location tracking settings. 

Angela Lang/CNET

Geofence warrants are a concern among privacy advocates and lawmakers, and recently unsealed court documents show that Google engineers also have issues with the sweeping requests for location data. 

On Friday, Arizona's attorney general published internal Google emails obtained as part of the state's ongoing lawsuit alleging consumer fraud over location data. Google had fought to keep its internal discussions secret, saying the investigation was "improperly publicized."

On Aug. 5, a judge ruled that Google had to affirmatively move to seal documents, and anything the tech giant didn't take action on would be released publicly, an Arizona attorney general spokesperson said. Of the 270 documents obtained by the state attorney general's office, 33 have been made public.

The released documents show internal discussions among Google engineers and communications staff that highlighted frustrations over the company's collection of location data and the lack of meaningful controls for its billions of users. 

"Location off should mean location off, not 'except for this case or that case,'" a Google engineer wrote in an email thread on Aug. 13, 2018. "The current UI feels like it is designed to make things possible, yet difficult enough that people won't figure it out." 

The discussions also included worries about geofence warrants -- requests for location data in which law enforcement provides a time and a place, and Google responds with information on all devices that were in that area. 

Alphabet-owned Google isn't the only company that has location data, but it receives the majority of geofence warrants because of its vast number of users and its Sensorvault database, which stores location history for millions of people. 
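In essence, answering a geofence warrant is a filter over stored location history: law enforcement supplies a time window and an area, and the provider returns every device seen inside both. The following is a minimal sketch with invented data and field names, not Google's actual system, showing why such a query sweeps in bystanders by construction.

```python
pings = [
    # (device, unix_time, latitude, longitude)
    ("device-A", 1000, 40.7128, -74.0060),
    ("device-B", 1050, 40.7130, -74.0055),
    ("device-C", 1050, 34.0522, -118.2437),  # elsewhere entirely
]


def geofence_matches(pings, t_start, t_end, lat_min, lat_max, lon_min, lon_max):
    """Return every device whose stored history puts it in the box during the window."""
    return {
        device
        for device, t, lat, lon in pings
        if t_start <= t <= t_end
        and lat_min <= lat <= lat_max
        and lon_min <= lon <= lon_max
    }


# A warrant covering one city block for a short window returns everyone who
# happened to be there -- there is no notion of a suspect in the query itself.
print(geofence_matches(pings, 1000, 1100, 40.71, 40.72, -74.01, -74.00))
```

The query is indiscriminate by design, which is what the Google engineer's "wrongful-arrest lottery" remark quoted below is getting at.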

"Privacy controls have long been built into our services and our teams work continuously to discuss and improve them. In the case of location information, we've heard feedback, and have worked hard to improve our privacy controls," said Jose Castaneda, a Google spokesperson. "In fact, even these cherry picked published extracts state clearly that the team's goal was to 'Reduce confusion around Location History Settings.'" 

Geofence warrants face constitutional challenges in Virginia, and lawmakers in New York have proposed a bill to make them illegal. In Illinois, a federal judge on Monday struck down the practice, finding that the warrants violated the Fourth Amendment. 

Police have increasingly used geofence warrants, with a 1,500 percent rise from 2017 to 2018, and a subsequent 500 percent increase from 2018 to 2019. The surge in geofence warrant requests, coupled with confusion among Google staff about location data, rang privacy alarms within the search giant, the court documents show.

After a Google staffer explained there were three different settings for location data -- Location Services, which uses your GPS, Location History, which logs where you've been, and Timeline, which makes an itinerary from your logs -- a software engineer expressed frustration in internal emails. 

"I'd want to know which of these options (some? All? none?) enter me into the wrongful-arrest lottery," the engineer wrote. "And I'd want that to be very clear to even the least technical people."

While other Google staffers on the email thread looked to downplay concerns over geofence warrants, the engineer called the practice scary, pointed out that police were randomly searching for people, and argued the company had a responsibility to protect people's data from government requests. 

"I feel like erring on the side of validating people's expectations for keeping their information away from potentially unreasonable uses by the government is anyone's job who works here," the engineer said in an email on April 5, 2019. 

The internal emails offer a glimpse at how some Google staffers view geofence warrants, a subject the company has been careful in discussing. In recent testimony, CEO Sundar Pichai told Congress the warrants were an important area for lawmakers to have oversight on. 

Privacy advocates are asking for Google to do more against geofence warrants. 

"These emails describe a Google where employees know enough about geofence warrants to be scared, without knowing enough to actually fix the problem," said Surveillance Technology Oversight Project Executive Director Albert Fox Cahn. "The internal fight over geofence warrants is particularly alarming. It highlights just how dependent we are on giant tech firms to push back when police try to weaponize our devices against the public."


'Trying to rein in the overall mess' 

Internal emails from Google going as far back as October 2014 show the company knew that its privacy settings were confusing. 

A presentation titled Simplifying Location History Settings (On Android) noted that "most users don't understand the difference between location reporting and location history." 

Location History, which people need to opt in to on Google Maps, is a log of where you've been. Location Reporting controls which devices provide that data. 

That confusion carried on, with emails from 2016 noting that even Google's own staffers didn't know there were switches to turn off location reporting for each device. An email from 2017 described a project to "rein in the overall mess that we have with regards to data collection, consent and storage." 

The same staffer pointed out Location History specifically, calling it "super messy." 

It appeared to still be a mess by 2018, when the Associated Press published an investigation of Google location tracking that revealed the company still tracked people even after they'd turned the function off.

In internal emails from April 2019, a Google staffer pointed out that he thought he'd turned off tracking. It turned out he'd only turned off location history and that the tracking function was still active. 

"Our messaging around this is enough to confuse a privacy-focused Google [software engineer]. That's not good," the engineer said. "*I* should be able to get *my* location on *my* phone without sharing that information with Google. This may be how Apple is eating our lunch."

The engineer wasn't alone in this criticism, with multiple emails saying the company wasn't doing a good job at explaining how it tracks location data, confusing its own engineers. 

"The real failure is that we shipped a [user interface] that confuses users and requires explanation," a Google staffer said.


Former Uber security chief charged for allegedly covering up hack – CNET


Joe Sullivan has been charged with obstruction of justice. 

James Martin/CNET

The Department of Justice has indicted Uber's former head of security for allegedly covering up a data breach that affected more than 50 million people. While Uber and its then-chief security officer learned about the hack in 2016, the company didn't publicly disclose it until a year later, prosecutors said. 

Officials said the alleged cover-up came directly from Joe Sullivan, who served as Uber's security chief from April 2015 to November 2017. In October 2016, Uber suffered a data breach. The two hackers behind it, Brandon Charles Glover and Vasile Mereacre, were convicted in October 2019; the pair was also behind cyberattacks against the online learning website Lynda. 

The hackers stole data on 57 million drivers and riders -- including names, email addresses and driver's license numbers -- and agreed to delete it for a price.    

Rather than publicly disclosing the hack, which companies are required to do within a certain number of days in states like California, Uber paid the hackers $100,000 and had them sign a nondisclosure agreement. 

Sullivan described the payment as a bug bounty reward, which companies often pay out to researchers who discover security flaws. Prosecutors said the payment was more of a cover-up than a bounty reward. 

"While this case is an extreme example of a prolonged attempt to subvert law enforcement, we hope companies stand up and take notice," FBI deputy special agent in charge Craig Fair said in a statement. "Do not help criminal hackers cover their tracks. Do not make the problem worse for your customers, and do not cover up criminal attempts to steal people's personal data."   

The hack only became public knowledge after a full year, when former Uber CEO Travis Kalanick was forced out and replaced by Dara Khosrowshahi. Sullivan had briefed the new CEO about the cyberattack, but edited out details about what data the hackers obtained and when the company paid the hackers. 

The company fired Sullivan after the public disclosure, and paid $148 million in a settlement over the data breach. 

Sullivan has been charged with obstruction of justice and faces a maximum of five years in prison. He is currently the chief security officer of Cloudflare. 

"This case centers on a data security investigation at Uber by a large, cross-functional team made up of some of the world's foremost security experts, Mr. Sullivan included. If not for Mr. Sullivan's and his team's efforts, it's likely that the individuals responsible for this incident never would have been identified at all," Sullivan's attorney Bradford Williams said in a statement. "From the outset, Mr. Sullivan and his team collaborated closely with legal, communications and other relevant teams at Uber, in accordance with the company's written policies. Those policies made clear that Uber's legal department -- and not Mr. Sullivan or his group -- was responsible for deciding whether, and to whom, the matter should be disclosed."  

In private conversations, Sullivan told Uber's security team it needed to "make sure word of the breach did not get out," according to court documents. The data breach also remained hidden from the Federal Trade Commission, which was already investigating Uber over a data breach in 2014.   

"We continue to cooperate fully with the Department of Justice's investigation. Our decision in 2017 to disclose the incident was not only the right thing to do, it embodies the principles by which we are running our business today: transparency, integrity, and accountability," Uber said in a statement. 

The bug bounty payment to Uber's hackers stood out from how the company usually rewarded security researchers. For starters, Uber's bug bounty program had a cap of $10,000, and never paid anything close to $100,000, according to court documents. 

Also, no bug bounty rewards with Uber ever came with a nondisclosure agreement like the ones created for the two hackers. The company's own bug bounty policy also specified that the company wouldn't pay out for data dumps from its servers. 

"Silicon Valley is not the Wild West," said US Attorney David Anderson. "We expect good corporate citizenship. We expect prompt reporting of criminal conduct. We expect cooperation with our investigations. We will not tolerate corporate cover-ups."
