By EditorDavid from Slashdot's you-have-GNU-sense-of-humor department
An anonymous reader quotes The Register:
Late last month, open-source contributor Raymond Nicholson proposed a change to the manual for glibc, the GNU implementation of the C programming language's standard library, to remove "the abortion joke," which accompanied the explanation of libc's abort() function... The joke, which has been around since the 1990s and is referred to as a censorship joke by those supporting its inclusion, reads as follows:
25.7.4 Aborting a Program... Future Change Warning: Proposed Federal censorship regulations may prohibit us from giving you information about the possibility of calling this function. We would be required to say that this is not an acceptable way of terminating a program.
On April 30, the proposed change was made, removing the passage from the documentation. That didn't sit well with a number of people involved in the glibc project, including the joke's author, none other than Free Software Foundation president and firebrand Richard Stallman, who argued that the removal of the joke qualified as censorship... Carlos O'Donnell, a senior software engineer at Red Hat, recommended avoiding jokes altogether, a position supported by many of those weighing in on the issue; a majority appears to favor removal.
But in a post to the project mailing list, Stallman wrote "Please do not remove it. GNU is not a purely technical project, so the fact that this is not strictly and grimly technical is not a reason to remove this." He added later that "I exercise my authority over glibc very rarely -- and when I have done so, I have talked with the official maintainers. So rarely that some of you thought that you are entirely autonomous. But that is not the case. On this particular question, I made a decision long ago and stated it where all of you could see it."
By msmash from Slashdot's next-up department
The White House has set up a new task force dedicated to US artificial intelligence efforts, the Trump administration announced today during an event with technology executives, government leaders, and AI experts. From a report: The news and the event, which was organized by the federal government, are both moves to further the country's AI development as other regions, such as Europe and Asia, ramp up AI investment and R&D. The administration will be further investing in AI, Michael Kratsios, deputy CTO of the White House's Office of Science and Technology Policy, said at the event. "To realize the full potential of AI for the American people, it will require the combined efforts of industry, academia, and government," Kratsios said, according to FedScoop. According to the Trump administration, the federal government has increased its investment in unclassified R&D for AI by 40 percent since 2015. In his speech, Kratsios highlighted ways the US could advance AI, pointing to robotics startups in Pittsburgh as models for how to spur job growth in areas hurt by workplace automation. Startups like those now hire engineers, scientists, bookkeepers, and administrators, he said, and are evidence that AI does not necessarily mean massive unemployment is on the horizon. Further reading: The White House says a new AI task force will protect workers and keep America first (MIT Tech Review).
By msmash from Slashdot's security-woes department
Apple's Siri, Amazon's Alexa, and Google's Assistant were meant to be controlled by live human voices, but all three AI assistants are susceptible to hidden commands undetectable to the human ear, researchers in China and the United States have discovered. From a report: The New York Times reports today that the assistants can be controlled using subsonic commands hidden in radio music, YouTube videos, or even white noise played over speakers, a potentially huge security risk for users. According to the report, the assistants can be made to dial phone numbers, launch websites, make purchases, and access smart home accessories -- such as door locks -- at the same time as human listeners are perceiving anything from completely different spoken text to recordings of music. In some cases, assistants can be instructed to take pictures or send text messages, receiving commands from up to 25 feet away through a building's open windows. Researchers at Berkeley said that they can modestly alter audio files "to cancel out the sound that the speech recognition system was supposed to hear and replace it with a sound that would be transcribed differently by machines while being nearly undetectable to the human ear."
By msmash from Slashdot's their-point-of-view department
The most talked-about product from Google's developer conference earlier this week -- Duplex -- has drawn concerns from many. At the conference Google previewed Duplex, an experimental service that lets its voice-based digital assistant make phone calls and write emails. In a demonstration on stage, the Google Assistant spoke with a hair salon receptionist, mimicking the "ums" and "hmms" of human speech. In another demo, it chatted with a restaurant employee to book a table. But outside Google's circles, people are worried; and Google appears to be aware of the concerns. From a report: "Horrifying," Zeynep Tufekci, a professor and frequent tech company critic, wrote on Twitter about Duplex. "Silicon Valley is ethically lost, rudderless and has not learned a thing." As in previous years, the company unveiled a feature before it was ready. Google is still debating how to unleash it, and how human to make the technology, several employees said during the conference. That debate touches on a far bigger dilemma for Google: As the company races to build uncanny, human-like intelligence, it is wary of any missteps that cause people to lose trust in using its services. Scott Huffman, an executive on Google's Assistant team, said the response to Duplex was mixed. Some people were blown away by the technical demos, while others were concerned about the implications. Huffman said he understands the concerns. He doesn't, however, endorse one proposed solution to the creepy factor: giving it an obviously robotic voice when it calls. "People will probably hang up," he said. [...] Another Google employee working on the assistant seemed to disagree. "We don't want to pretend to be a human," designer Ryan Germick said when discussing the digital assistant at a developer session earlier on Wednesday. Germick did agree, however, that Google's aim was to make the assistant human enough to keep users engaged.
The unspoken goal: Keep users asking questions and sharing information with the company -- data it can use to improve its answers and services.
By msmash from Slashdot's reckoning-is-here department
Young professionals in China are pushing back against employers who expect them to work around the clock, saying no to the decades-old "rule of 996" -- working from 9am to 9pm, six days a week. From a report: At the forefront are millennials who are often better educated, more aware of their rights and more interested in finding something fulfilling than the previous generation. And as only children (China's one-child policy wasn't eased until 2015), they are also outspoken and pampered. "In my experience young people, especially the post-90s generation, are reluctant to work overtime -- they are more self-centered," says labour rights expert Li Jupeng, one of many who have observed some millennials challenging the 996 concept. The relative affluence of their parents and grandparents is part of the reason. China's rapid economic transformation has given rise to a sizeable middle class, with almost 70% of the country's urban population making between $9,000 and $34,000 annually in 2012. In 2000, that figure was just 4%. As only children, millennials are receiving a lot of support from their families -- including a financial safety net should their careers not go as planned. Although their options for pushing back are limited, some are no longer willing to put in long hours for a meagre paycheck.
By msmash from Slashdot's security-woes department
An anonymous reader writes: The source code behind a police breathalyzer widely used in multiple states -- and millions of drunk driving arrests -- is under fire. It's the latest case of technology and the real world colliding -- one that revolves around source code, calibration of equipment, two researchers and legal maneuvering, state law enforcement agencies, and Draeger, the breathalyzer's manufacturer. This most recent skirmish began a decade ago when Washington state police sought to replace its aging fleet of breathalyzers. When the Washington police opened solicitations, the only bidder, Draeger, a German medical technology maker, won the contract to sell its flagship device, the Alcotest 9510, across the state. But defense attorneys have long believed the breathalyzer is faulty. Jason Lantz, a Washington-based defense lawyer, enlisted a software engineer and a security researcher to examine its source code. The two experts wrote in a preliminary report that they found flaws capable of producing incorrect breath test results. The defense hailed the results as a breakthrough, believing the findings could cast doubt on countless drunk-driving prosecutions.
By BeauHD from Slashdot's new-tools department
Ars Technica's Eric Berger reports on how dramatic increases in computer power have helped improve the accuracy of hurricane forecasts: Based upon new data from the National Hurricane Center for hurricanes in the Atlantic basin, the average track error for a five-day forecast fell to 155 nautical miles in 2017. That is, the location predicted by the hurricane center for a given storm was just 155 nautical miles away from the actual position of the storm five days later. What is incredible about this is that, back in 1998, this was the average error for a two-day track forecast. In fact, the annual "verification" report released Wednesday shows that for the hyperactive 2017 Atlantic hurricane season -- which included the devastating hurricanes Harvey, Irma, and Maria -- the National Hurricane Center set records for track forecasts at all time periods: 12-hour, 24-hour, and two-, three-, four- and five-day forecasts.
By BeauHD from Slashdot's heads-up department
Yesterday at its I/O developer conference, Google debuted "Duplex," an AI system for accomplishing real-world tasks over the phone. "To show off its capabilities, CEO Sundar Pichai played two recordings of Google Assistant running Duplex, scheduling a hair appointment and a dinner reservation," reports Quartz. "In each, the person picking up the phone didn't seem to realize they were talking to a computer." Slashdot reader Lauren Weinstein argues that the new system should come with some sort of warning to let the other person on the line know that they are talking with a computer: With no exceptions so far, the sense of these reactions has confirmed what I suspected -- that people are just fine with talking to automated systems so long as they are aware of the fact that they are not talking to another person. They react viscerally and negatively to the concept of machine-based systems that have the effect (whether intended or not) of fooling them into believing that a human is at the other end of the line. To use the vernacular: "Don't try to con me, bro!" Luckily, there's a relatively simple way to fix this problem at this early stage -- well before it becomes a big issue impacting many lives.
I believe that all production environment calls (essentially, calls not being made for internal test purposes) from Google's Duplex system should be required by Google to include an initial verbal warning to the called party that they have been called by an automated system, not by a human being -- the exact wording of that announcement to be determined.
By BeauHD from Slashdot's testing-in-progress department
An anonymous reader quotes a report from The Verge: Just over six months after President Trump announced the creation of a program meant to spur the development of drone trials around the country, the Department of Transportation has announced the first 10 winners. Among those selected, three state transportation agencies, two US cities, and two universities will work with private companies like FedEx and CNN on trials that will see drones used for tasks like package delivery, journalism, healthcare, and more.
Formally known as the Unmanned Aircraft Systems Integration Pilot Program, the initiative encourages U.S. cities and states to partner with companies on drone trials that expand how the aircraft are used around the country. This includes, in some cases, allowing drones to fly over crowds, beyond the pilot's line of sight, and at night -- situations that are usually prohibited unless the person flying obtains an official waiver from the FAA. The program's goal is to accelerate potential commercial applications for drone use. One of the 10 selections is Florida's Lee County Mosquito Control District. The small government agency will use drones to help control mosquito populations by searching for hard-to-find pockets of larvae at a faster rate than inspectors can on foot, while also reducing the risk of being bitten. The Choctaw Nation of Oklahoma will work on flying drones beyond a pilot's line of sight as part of a partnership with CNN. Furthermore, North Carolina's DOT was selected to test a drone food-delivery service, Tennessee's Memphis-Shelby County Airport Authority was chosen to test deliveries in partnership with FedEx, and the City of Reno, Nevada, was picked to work with Flirtey, a company focused on using drones to deliver medical supplies.
By BeauHD from Slashdot's alternative-methods department
States are reportedly turning to nitrogen gas to carry out the death penalty. "Oklahoma, Alabama and Mississippi have authorized nitrogen for executions and are developing protocols to use it, which represents a leap into the unknown," reports The New York Times. "There is no scientific data on executing people with nitrogen, leading some experts to question whether states, in trying to solve old problems, may create new ones." Slashdot reader schwit1 shares an excerpt from a report via The New York Times: What little is known about human death by nitrogen comes from industrial and medical accidents and its use in suicide. In accidents, when people have been exposed to high levels of nitrogen and little air in an enclosed space, they have died quickly. In some cases co-workers who rushed in to rescue them also collapsed and died. Nitrogen itself is not poisonous, but someone who inhales it, with no air, will pass out quickly, probably in less than a minute, and die soon after -- from lack of oxygen. The same is true of other physiologically inert gases, including helium and argon, which kill only by replacing oxygen.
Death from nitrogen is thought to be painless. It should prevent the condition that causes feelings of suffocation: the buildup of carbon dioxide from not being able to exhale. Humans are highly sensitive to carbon dioxide -- too much brings on the panicky feeling of not being able to breathe. Somewhat surprisingly, the lack of oxygen doesn't trigger that same reflex. Someone breathing pure nitrogen can still exhale carbon dioxide and therefore should not have the sensation of smothering.
By BeauHD from Slashdot's laying-down-the-law department
Apple has been removing some apps that share location data with third parties and informing developers that their app violates two parts of the App Store Review Guidelines. "The company informs developers via email that 'upon re-evaluation,' their application is in violation of sections 5.1.1 and 5.1.2 of the App Store Review Guidelines, which pertain to transmitting user location data and user awareness of data collection," reports 9to5Mac. From the report: Apple explains that developers must remove any code, frameworks, or SDKs that relate to the violation before their app can be resubmitted to the App Store. Apple's crackdown on these applications comes amid a growing industry shift due to General Data Protection Regulation, or GDPR, in the European Union. While Apple has always been a privacy-focused company, it is seemingly looking to ensure that developers take the same care of user data.
In the instances we've seen, the apps in question don't do enough to inform users about what happens with their data. In addition to simply asking for permission, Apple appears to want developers to explain what the data is used for and how it is shared. Furthermore, the company is cracking down on instances where the data is used for purposes unrelated to improving the user experience.
By msmash from Slashdot's setting-precedence department
California regulators said on Wednesday they have unanimously approved a historic plan that will require most new homes in the state to have rooftop solar panels that turn sunlight into electricity, starting in 2020. From a report: Most new homes built after Jan. 1, 2020, will be required to include solar systems as part of energy-efficiency standards adopted Wednesday by the California Energy Commission. While that's a boost for the solar industry, critics warned that it will also drive up the cost of buying a house by almost $10,000. The move underscores how rooftop solar, once a luxury reserved for wealthy, green-leaning homeowners, is becoming a mainstream energy source, with California -- the nation's largest solar market -- paving the way. The Golden State has long been at the vanguard of progressive energy policies, from setting energy-efficiency standards for appliances to instituting an economy-wide program to curb greenhouse gases. The housing mandate is part of Governor Jerry Brown's effort to slash carbon emissions by 40 percent by 2030, and offers up a playbook for other states to follow.