By manishs from Slashdot's story-of-gasoline,-convenience,-and-law department
Eric Newcomer, reporting for Bloomberg: A new crop of startups is trying to make gas stations obsolete. Tap an app, and they'll bring the gas to you, filling up your car while you're at work or at home. Filld, WeFuel, Yoshi, Purple and Booster Fuels have started operating in a few cities, including San Francisco, Los Angeles, Palo Alto, Nashville, Tennessee, and Atlanta, Georgia. But officials in some of those cities say that driving around in a pickup truck with hundreds of gallons of gasoline might not be safe. "It is not permitted," said Lt. Jonathan Baxter, a spokesman for the San Francisco fire department, adding that if San Francisco residents see any companies fueling vehicles in the city, they should call the fire department. "We haven't talked to them. I don't know about that. It's news to me," said Nick Alexander, co-founder of Yoshi. "You can never ask for permission because no one will give it," said Chris Aubuchon, the chief executive officer at Filld. The Los Angeles Fire Department said it's drafting a policy around gasoline delivery. "Our current fire code does not allow this process; however, we are exploring a way this could be allowed with some restrictions," said Capt. Daniel Curry, a spokesman for the city's fire department.
By manishs from Slashdot's humans-need-not-apply department
An anonymous reader writes: It is no secret that machines have come to largely replace physical labor, and computers surpass human beings in processing data. But in the future, the development of artificial intelligence may render humans obsolete even in the realm of emotional intelligence (warning: annoying popup adverts), according to Yuval Harari, a renowned professor of history. Harari said: AI today is able to diagnose your personality and emotional state by looking at your face and recognizing tiny muscle movements. It can tell whether you are tired, excited, angry, joyful, in love ... it can tell these things even though AI itself doesn't feel anger or love. In the future, therefore, AI could drive humans out of the job market and make many humans completely useless from an economic perspective, even in areas where human interaction was previously considered crucial. Humans only have two basic abilities -- physical and cognitive. When machines replaced us in physical abilities, we moved on to jobs that require cognitive abilities. ... If AI becomes better than us in that, there is no third field humans can move to.
By manishs from Slashdot's curious-case-of-bitcoin,-and-whoever-created-it department
Australian entrepreneur Craig Wright has put an end to the years-long speculation about the creator of Bitcoin. In an interview with the BBC, The Economist (could be a paywall), and GQ, Wright claimed that he is indeed the person who developed the concepts on which the Bitcoin cryptocurrency is built. According to the BBC, Mr. Wright provided "technical proof to back up his claim using coins known to be owned by Bitcoin's creator." Wright writes in a blog post: [A]fter many years, and having experienced the ebb and flow of life those years have brought, I think I am finally at peace with what he meant. If I sign Craig Wright, it is not the same as if I sign Craig Wright, Satoshi[...] Since those early days, after distancing myself from the public persona that was Satoshi, I have poured every measure of myself into research. I have been silent, but I have not been absent. I have been engaged with an exceptional group and look forward to sharing our remarkable work when they are ready. Satoshi is dead. But this is only the beginning. According to Wright's website, he is a "computer scientist, businessman and inventor" born in Brisbane, Australia, in October 1970. Some have questioned the authenticity and importance of the "technical proof" Wright has provided. Nik Cubrilovic, an Australian former hacker and leading internet security blogger, wrote, "I don't believe for a second Wright is Satoshi. I know two people who worked with Wright, characterized him as crazy and schemer/charlatan." Michele Spagnuolo, Information Security Engineer at Google, added, "He's not Satoshi. He just reused a signed message (of a Sartre text) by Satoshi with block 9 key as 'proof.'"
By EditorDavid from Slashdot's school-principles department
theodp writes: Last week, Microsoft and some of the biggest names in tech and corporate America threw their weight behind a Change.org petition that urged Congress to fund K-12 Computer Science education. The petition, started by the tech-backed CS Education Coalition (btw, 901 K Street NW is Microsoft's DC HQ) in partnership with tech-backed Code.org, now has 90,000+ supporters. But three years ago, Microsoft backed a very different Change.org petition that called for corporate America to foot the STEM education bill. "While the need to expand high-skilled immigration is immediate," read the letter to Congress, "we also need to expand STEM opportunities in U.S. education. A positive proposal has emerged in Washington to create a national STEM education fund, paid for only by businesses using green cards and visas. This fund will help prepare Americans for 21st-century STEM jobs. The proposal is supported by a broad coalition [PDF] that includes Microsoft, GE, the National Council of La Raza, the National Association of Manufacturers, and the National Science Teachers Association, to name a few." The earlier petition, which wound up with 41,009 supporters, was started by Voices for Innovation, a self-described "Microsoft supported community" that says it's now "proud to support the Computer Science Education Coalition" as part of its efforts to "shape public policies for our 21st century digital economy and society." So, what changed? Well, Mother Jones did warn that what Microsoft promises and what it delivers for education isn't necessarily the same...
By EditorDavid from Slashdot's brain-wars department
An anonymous reader writes: OpenAI, a billion-dollar research non-profit backed by Elon Musk and other Silicon Valley executives, just released a public beta of a new open-source "gym" for computer programmers working on artificial intelligence. "Nothing beats a competitive environment to motivate developers," says Patrick Moorhead, an analyst at Moor Insights & Strategy. "It's like a monster truck rally for AI programmers."
The gym lets developers run tests in a standardized environment and share their results, and was built by OpenAI to develop algorithms for the non-profit's own research, according to the Christian Science Monitor. "The gym's exercises range from robot simulations to Atari games and are designed to develop reinforcement learning, the type of computer skill needed for motor control and decision-making. 'Long-term, we want this curation to be a community effort rather than something owned by us,' Greg Brockman and John Schulman wrote in an OpenAI blog post. 'We'll necessarily have to figure out the details over time, and we'd love your help in doing so.'"
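The standardized environment described above boils down to a simple reset/step loop: an agent observes, acts, and collects reward until an episode ends. As a rough illustration of that interface, here is a self-contained toy environment (a hypothetical stand-in, not the actual OpenAI Gym package) driven by a random policy:

```python
import random

class ToyEnv:
    """Toy environment mimicking a gym-style reset/step interface.
    Illustrative only -- not the real OpenAI Gym package."""

    def __init__(self, max_steps=10):
        self.max_steps = max_steps
        self.steps = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.steps = 0
        return 0

    def step(self, action):
        """Advance one timestep; return (observation, reward, done, info)."""
        self.steps += 1
        reward = 1.0  # one point per surviving step, CartPole-style
        done = self.steps >= self.max_steps
        return self.steps, reward, done, {}

env = ToyEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = random.choice([0, 1])  # random policy as a trivial baseline
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)  # -> 10.0
```

Because every environment exposes the same loop, two developers can swap in different agents and compare scores directly, which is what makes shared benchmark results meaningful.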
By EditorDavid from Slashdot's out-brief-candle department
HughPickens.com writes: Robinson Meyer writes in The Atlantic that in its annual report on "global catastrophic risk," the Global Challenges Foundation estimates the risk of human extinction due to climate change or an accidental nuclear war at 0.1 percent every year. That may sound low, but extrapolated over a century it comes to a 9.5 percent chance of human extinction within the next hundred years. The report holds catastrophic climate change and nuclear war far above other potential causes, and for good reason, citing multiple occasions when the world stood on the brink of atomic annihilation. While most of these occurred during the Cold War, another took place during the 1990s, the most peaceful decade in recent memory. The closest may have been on September 26, 1983, when a bug in the U.S.S.R. early-warning system reported that five NATO nuclear missiles had been launched and were bound for Russian targets. The officer watching the system, Stanislav Petrov, had also helped design it, and he decided that any real NATO first strike would involve hundreds of I.C.B.M.s. He therefore resolved that the computers must be malfunctioning, and did not report an attack.
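The jump from 0.1 percent per year to 9.5 percent per century is just compounding: if each year carries an independent 0.1 percent risk, the chance of surviving all 100 years is 0.999 raised to the 100th power, and the century-scale risk is the remainder:

```python
# Compound a 0.1% annual extinction risk over 100 independent years.
annual_risk = 0.001
years = 100

survival = (1 - annual_risk) ** years   # probability no catastrophe occurs at all
century_risk = 1 - survival

print(round(century_risk * 100, 1))     # -> 9.5 (percent)
```

Note the result is slightly less than a naive 100 x 0.1 = 10 percent, because each year's risk only applies if humanity survived all the previous ones.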