By msmash from Slashdot's growing-pattern department
An anonymous reader writes: Seoul-born Wendy Hui Kyong Chun, a professor at Brown University known for her work on fake news, is moving to Canada. So is Alan Aspuru-Guzik, a Harvard chemistry professor working on quantum computing and artificial intelligence. They are among 24 top academic minds around the world wooed to Canada by an aggressive recruitment effort offering ultra-attractive sinecures, seven-year funding arrangements -- and, Chun and Aspuru-Guzik said in separate interviews with Axios, a different political environment from the U.S. The "Canada 150 Research Chairs Program" is spending $117 million on seven-year grants of either $350,000 a year or $1 million a year. It's part of a campaign by numerous countries to attract scholars unhappy with Brexit, the election of Donald Trump, and other political trends, sweetened with unusually generous research conditions.
By BeauHD from Slashdot's shrouded-in-secrecy department
The Electronic Frontier Foundation's Peter Eckersley writes: Yesterday, The New York Times reported that there is widespread unrest amongst Google's employees about the company's work on a U.S. military project called "Project Maven." Google has claimed that its work on Maven is for "non-offensive uses only," but it seems that the company is building computer vision systems to flag objects and people seen by military drones for human review. This may in some cases lead to subsequent targeting by missile strikes. EFF has been mulling the ethical implications of such contracts, and we have some advice for Google and other tech companies that are considering building military AI systems. The EFF lists several "starting point" questions that any company, or any worker, considering whether to work with the military on a project with potentially dangerous or risky AI applications should be asking: 1. Is it possible to create strong and binding international institutions or agreements that define acceptable military uses and limitations in the use of AI? While this is not an easy task, the current lack of such structures is troubling. There are serious and potentially destabilizing impacts from deploying AI in any military setting not clearly governed by settled rules of war. The use of AI in potential target identification processes is one clear category of uses that must be governed by law.
2. Is there a robust process for studying and mitigating the safety and geopolitical stability problems that could result from the deployment of military AI? Does this process apply before work commences, along the development pathway, and after deployment? Could it incorporate sufficient expertise to address subtle and complex technical problems? And would those leading the process have sufficient independence and authority to ensure that it can check companies' and military agencies' decisions?
By BeauHD from Slashdot's fork-and-bork department
An anonymous reader quotes a report from The Register: A remote-code execution vulnerability in Windows Defender -- a flaw that can be exploited by malicious .rar files to run malware on PCs -- has been traced back to an open-source archiving tool Microsoft adopted for its own use. The bug, CVE-2018-0986, was patched on Tuesday in the latest version of the Microsoft Malware Protection Engine (1.1.14700.5) in Windows Defender, Security Essentials, Exchange Server, Forefront Endpoint Protection, and Intune Endpoint Protection. This update should be installed if it has not already been applied automatically to your device. The vulnerability can be leveraged by an attacker to achieve remote code execution on a victim's machine simply by getting the mark to download -- via a webpage or email or similar -- a specially crafted .rar file while the anti-malware engine's scanning feature is on. In many cases, this analysis is set to happen automatically.
When the malware engine scans the malicious archive, it triggers a memory corruption bug that leads to the execution of evil code smuggled within the file with powerful LocalSystem rights, granting total control over the computer. The screwup was discovered and reported to Microsoft by legendary security researcher Halvar Flake, now working for Google. Flake was able to trace the vulnerability back to an older version of unrar, an open-source archiving utility used to unpack .rar archives. Apparently, Microsoft forked that version of unrar and incorporated the component into its operating system's antivirus engine. That forked code was then modified so that all signed integer variables were converted to unsigned variables, causing knock-on problems with mathematical comparisons. This in turn left the software vulnerable to memory corruption errors, which can crash the antivirus package or allow malicious code to potentially execute.
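To make the signed-versus-unsigned issue concrete, below is a minimal C sketch of the bug class. It is a hypothetical illustration, not the actual unrar or Defender code: the function names, buffer layout, and length check are invented for the example.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical illustration of the bug class described above -- NOT the
 * actual unrar or Defender code. The only difference between the two
 * functions is the integer type of the length variables. */

/* Signed version: if the archive supplies used > capacity, space_left goes
 * negative, the check rejects the write, and nothing bad happens. */
int append_signed(char *buf, int capacity, int used, const char *src, int n)
{
    int space_left = capacity - used;
    if (space_left < n)                      /* also catches used > capacity */
        return -1;
    memcpy(buf + used, src, (size_t)n);
    return 0;
}

/* After a blanket signed->unsigned conversion: the same subtraction wraps
 * around to a huge value, the check passes, and memcpy writes far past the
 * end of buf -- memory corruption that, inside the Defender engine, runs
 * with LocalSystem privileges. */
int append_unsigned(char *buf, unsigned capacity, unsigned used,
                    const char *src, unsigned n)
{
    unsigned space_left = capacity - used;   /* wraps if used > capacity */
    if (space_left < n)
        return -1;
    memcpy(buf + used, src, n);              /* out-of-bounds write */
    return 0;
}

int main(void)
{
    char buf[16];
    const char payload[] = "AAAA";

    /* Attacker-controlled "used" offset of 1000 against a 16-byte buffer. */
    printf("signed check rejects the write: %d\n",
           append_signed(buf, 16, 1000, payload, 4));

    /* The unsigned variant would pass its check and corrupt memory, so we
     * only show the wrapped arithmetic instead of calling it. */
    printf("unsigned space_left wraps to %u\n", 16u - 1000u);
    return 0;
}
```

The check reads identically in both versions; only the types differ, which is exactly the kind of knock-on effect a wholesale signed-to-unsigned conversion can introduce.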
By BeauHD from Slashdot's early-days department
Coinbase announced today that it is launching a new incubator fund for early-stage startups. "We're going to invest off our balance sheet into crypto companies," Coinbase President and COO Asiff Hirji told CNBC's "Fast Money" Thursday. "We will invest in companies that are in the space and are aligned with our values." From the report: Profits from the fund will be "de minimis" in the scope of the entire company but the fund is already off to a $15 million start and set to grow, Hirji said. The fund's seed-stage investments, which will begin this week, will help companies and founders in the crypto and blockchain space get off the ground. It's also meant to focus on building relationships within that ecosystem, he said. In order to do that, Coinbase could be investing in its competitors.
"You may also see us invest in companies that ostensibly look competitive with Coinbase," the San Francisco-based company said in a blog post. "We're taking a long term view of the space, and we believe that multiple approaches are healthy and good." Hirji emphasized that Coinbase Ventures is searching for founders, not the next money-making cryptocurrency. "By giving them access to capital we hope that they will grow great businesses," he said. "It's not about investing in the token, it's not about trying to line up tokens that we would put on our exchange."Read Replies (0)
By BeauHD from Slashdot's cause-and-effect department
Both the United Kingdom and Australia said Thursday that they have opened formal investigations into Facebook amid allegations that their citizens' data was improperly shared with Cambridge Analytica. ABC News reports: The Information Commissioner's Office in the U.K. is "looking at how data was collected from a third party app on Facebook and shared with Cambridge Analytica. We are also conducting a broader investigation into how social media platforms were used in political campaigning," according to Commissioner Elizabeth Denham. The office will investigate Facebook, along with 29 other organizations that have not been named. Earlier Thursday, Australia said it had opened a formal investigation into the tech giant amid allegations that Australian users' data was improperly shared with Cambridge Analytica. "Today I have opened a formal investigation into Facebook, following confirmation from Facebook that the information of over 300,000 Australian users may have been acquired and used without authorization," Angelene Falk, Australia's acting information commissioner and acting privacy commissioner, said. According to Falk, Australia will work with international regulatory agencies to investigate whether Facebook violated the country's privacy act. Under Australian law, the commissioner has the power to issue fines of up to $1.6 million to organizations that fail to comply with the act, according to the Australian Broadcasting Corporation. Australia and the U.K. joined the United States and Israel in investigating Facebook's breach of privacy.
By BeauHD from Slashdot's connecting-the-dots department
An anonymous reader writes: Within the past week, two Tesla crashes have been reported while Autopilot was engaged, and both involved a Tesla vehicle slamming into a highway divider. One of the crashes resulted in the death of Walter Huang, a Tesla customer with a Model X. The other crash resulted in minor injuries to the driver, thanks largely to a working highway safety barrier in front of the concrete divider. Ars Technica reports on the growing evidence that Tesla's Autopilot handles lane dividers poorly: "The September crash isn't the only evidence that has emerged that Tesla's Autopilot feature doesn't deal well with highway lane dividers. At least two people have uploaded videos to YouTube showing their Tesla vehicles steering toward concrete barriers. One driver grabbed the wheel to prevent a collision, while the other slammed on the brakes. Tesla argues that this issue doesn't necessarily mean that Autopilot is unsafe. 'Autopilot is intended for use only with a fully attentive driver,' a Tesla spokesperson told KGO-TV. Tesla argues that Autopilot can't prevent all accidents but that it makes accidents less likely. There's some data to back this up. A 2017 study by the National Highway Traffic Safety Administration (NHTSA) found that the rate of accidents dropped by 40 percent after the introduction of Autopilot. And Tesla argues that Autopilot-equipped Tesla cars have gone 320 million miles per fatality, much better than the 86 million miles for the average car. These figures don't necessarily settle the debate. That NHTSA figure doesn't break down the severity of crashes -- it's possible that Autopilot prevents relatively minor crashes but is less effective at preventing the most serious crashes. And as some Ars commenters have pointed out, luxury cars generally have fewer fatalities than the average vehicle. So it's possible that Tesla cars' low crash rates have more to do with its wealthy customer base than its Autopilot technology. What we can say, at a minimum, is that there's little evidence that Autopilot makes Tesla drivers less safe. And we can expect Tesla to steadily improve the car's capabilities over time."
By BeauHD from Slashdot's turn-water-into-wine department
An anonymous reader quotes a report from Bleeping Computer: An unknown attacker has exploited a bug in the Verge cryptocurrency network code to mine Verge coins at a very rapid pace and generate funds almost out of thin air. The Verge development team is preparing a hard-fork of the entire cryptocurrency code to fix the issue and revert the blockchain to a previous state before the attack to neutralize the hacker's gains. The attack took place yesterday, and initially users thought it was a "51% attack," an attack in which a malicious actor takes control of more than half of the network's nodes, gaining the power to forge transactions. Nonetheless, users who later looked into the suspicious network activity eventually tracked down what happened, revealing that a mysterious attacker had mined Verge coins at a near-impossible speed of 1,560 Verge coins (XVG) per second, the equivalent of $78/s. The malicious mining lasted only three hours, according to the Verge team. Users who tracked the illegally mined funds on the Verge blockchain said the hacker appears to have made around 15.6 million Verge coins, which is around $780,000.
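As a quick back-of-envelope check, the quoted figures hang together: the sketch below assumes the roughly $0.05-per-XVG price implied by the $78/s and 1,560 XVG/s numbers (an inference, not a market quote) and simply recomputes the totals.

```c
#include <stdio.h>

/* Back-of-envelope check of the figures quoted above. The ~$0.05 per XVG
 * price is inferred from the $78/s vs. 1,560 XVG/s ratio, not a market quote. */
int main(void)
{
    const double coins_per_sec = 1560.0;     /* reported mining rate */
    const double usd_per_sec   = 78.0;       /* reported dollar rate */
    const double hours         = 3.0;        /* reported duration    */
    const double coins_mined   = 15.6e6;     /* reported haul        */

    double usd_per_coin = usd_per_sec / coins_per_sec;      /* ~0.05      */
    double ceiling      = coins_per_sec * hours * 3600.0;   /* ~16.8M XVG */
    double haul_usd     = coins_mined * usd_per_coin;       /* ~780,000   */

    printf("implied price:    $%.2f per XVG\n", usd_per_coin);
    printf("3-hour ceiling:   %.1f million XVG\n", ceiling / 1e6);
    printf("reported haul:    $%.0f\n", haul_usd);
    return 0;
}
```

The reported 15.6 million XVG haul sits just under the roughly 16.8 million ceiling a constant 1,560 XVG/s rate would allow over three hours, so the numbers are mutually consistent.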
By msmash from Slashdot's interesting-moves department
Google is betting that algorithms that understand images and text will draw business to its cloud services, make augmented reality popular, and prompt us to search using our smartphone cameras. From a report: The search company's machine learning systems work best on material from a few rich parts of the world, like the US. They stumble more frequently on data from less affluent countries -- particularly emerging economies like India that Google is counting on to maintain its growth. "We have a very sparse training data set from parts of the world that are not the United States and Western Europe," says Anurag Batra, a researcher at Google. When Batra travels to his native Delhi, he says Google's AI systems become less smart. Now, he leads a project trying to change that. "We can understand pasta very well, but if you ask about pesarattu dosa, or anything from Korea or Vietnam, we're not very good," Batra says. To fix the problem, Batra is tapping the brains and phones of some of Google's billions of users. His team built an app called Crowdsource that asks people to perform quick tasks like checking the accuracy of Google's image-recognition and translation algorithms. Starting this week, the Crowdsource app also asks users to take and upload photos of nearby objects.
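For a sense of how crowd checks like these can feed back into training data, here is a minimal, hypothetical aggregation sketch. The thresholds, struct fields, and acceptance rule are assumptions made for illustration; this is not Google's actual Crowdsource pipeline.

```c
#include <stdio.h>

/* Hypothetical sketch of crowd-verification aggregation -- not Google's
 * actual Crowdsource pipeline. A candidate label is promoted to the
 * training set only when enough independent raters agree. */

#define MIN_VOTES     5      /* assumed: require at least 5 ratings  */
#define MIN_AGREEMENT 0.8    /* assumed: 80% of raters must confirm  */

typedef struct {
    const char *image_id;
    const char *candidate_label;   /* e.g. the classifier's own guess */
    int yes_votes;                 /* raters who confirmed the label  */
    int no_votes;                  /* raters who rejected it          */
} VerificationTask;

/* Returns 1 if the label should be accepted into the training set. */
int label_accepted(const VerificationTask *t)
{
    int total = t->yes_votes + t->no_votes;
    if (total < MIN_VOTES)
        return 0;                  /* not enough signal yet */
    return (double)t->yes_votes / total >= MIN_AGREEMENT;
}

int main(void)
{
    VerificationTask t = { "img_123", "pesarattu dosa", 9, 1 };
    printf("accept '%s' for %s? %s\n", t.candidate_label, t.image_id,
           label_accepted(&t) ? "yes" : "no");
    return 0;
}
```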
By msmash from Slashdot's closer-look department
Delta Air Lines and Sears Holdings on Thursday disclosed a data breach that may have exposed the payment card details of hundreds of thousands of online customers. From a report: The breach originated at a software vendor called [24]7.ai, which provides Sears, Delta, and other businesses with online chat services. Fewer than 100,000 Sears customers were supposedly impacted, according to Sears. A Delta spokesperson said hundreds of thousands of travelers are potentially exposed. Gizmodo has learned the breach was the result of a malware attack, and that the unauthorized access involved payment card numbers, CVV numbers, and expiration dates, in addition to customers' names and addresses. In a statement, [24]7.ai said the breach occurred on September 27th of last year and was contained roughly two weeks later. In a statement, Sears said it was first notified about the breach in mid-March. Credit card companies have been notified, and law enforcement is likewise investigating the incident. "Customers using a Sears-branded credit card were not impacted," Sears said. "In addition, there is no evidence that our stores were compromised or that any internal Sears systems were accessed by those responsible."