By BeauHD from Slashdot's harder-better-faster department
An anonymous reader quotes a report from DSLReports: Under Section 706 of the Telecommunications Act, the FCC is required to consistently measure whether broadband is being deployed to all Americans uniformly and "in a reasonable and timely fashion." If the FCC finds that broadband isn't being deployed quickly enough to the public, the agency is required by law to "take immediate action to accelerate deployment of such capability by removing barriers to infrastructure investment and by promoting competition in the telecommunications market." Unfortunately, whenever the FCC is stocked with revolving-door regulators all too focused on pleasing the likes of AT&T, Verizon, and Comcast, this dedication to expanding coverage and competition tends to waver.
What's more, regulators beholden to regional duopolies often take things one step further -- by trying to manipulate data to suggest that broadband is faster, cheaper, and more evenly deployed than it actually is. We saw this under former FCC boss Michael Powell (now the top lobbyist for the cable industry), and more recently when the industry cried incessantly as the base definition of broadband was bumped to 25 Mbps downstream, 3 Mbps upstream. We're about to see this effort take shape once again as the FCC prepares to vote in February on a new proposal that would dramatically weaken the definition of broadband. How? Under this new proposal, any area able to obtain wireless speeds of at least 10 Mbps down, 1 Mbps up would be deemed good enough for American consumers, pre-empting any need to prod industry to speed up or expand broadband coverage.
By BeauHD from Slashdot's anti-climactic department
An analysis by more than 200 astronomers has been published showing that the mysterious dimming of star KIC 8462852 -- nicknamed Tabby's star -- is not being produced by an alien megastructure. "The evidence points most strongly to a giant cloud of dust occasionally obscuring the star," reports The Guardian. From the report: KIC 8462852 is approximately 1,500 light years away from the Earth and hit the headlines in October 2015 when data from Nasa's Kepler space telescope showed that it was dimming by unexplainably large amounts. The star's light dropped by 20% first and then by 15%, making it unique. Even a large planet passing in front of the star would have blocked only about 1% of the light. For an object to block 15-20%, it would have to be approaching half the diameter of the star itself. With this realization, a few astronomers began whispering that such a signal would be the kind expected from a gigantic extraterrestrial construction orbiting in front of the star -- and the idea of the alien megastructure was born.
In the case of Tabby's star, the new observations show that it dims more at blue wavelengths than red. Thus, its light is passing through a dust cloud, not being blocked by an alien megastructure in orbit around the star. The new analysis of KIC 8462852 showing these results is to be published in The Astrophysical Journal Letters. It reinforces the conclusions reached by Huan Meng, University of Arizona, Tucson, and collaborators in October 2017. They monitored the star at multiple wavelengths using Nasa's Spitzer and Swift missions, and the Belgian AstroLAB IRIS observatory. These results were published in The Astrophysical Journal.
By BeauHD from Slashdot's short-lived department
SpaceX has reportedly worked with the Air Force to develop a GPS-equipped on-board computer, called the "Automatic Flight Safety System," that will safely and automatically detonate a Falcon 9 rocket in the sky if the launch threatens to go awry. Previously, an Air Force range-safety officer was required to be in place, ready to transmit a signal to detonate the rocket. Quartz reports: No other U.S. rocket has this capability yet, and it could open up new advantages for SpaceX: The U.S. Air Force is considering launches to polar orbits from Cape Canaveral, but the flight path is only viable if the rockets don't need to be tracked for range-safety reasons. That means SpaceX is the only company that could take advantage of the new corridor to space. Rockets at the Cape normally launch satellites eastward over the Atlantic into orbits roughly parallel to the equator. Launches from Florida into orbits traveling from pole to pole generally sent rockets too close to populated areas for the Air Force's liking. The new rules allow them to thread a safe path southward, past Miami and over Cuba.
SpaceX pushed for the new automated system for several reasons. One was efficacy: The on-board computer can react more quickly than human beings relying on radar data and radio transmissions to signal across miles of airspace, which gives the rocket more time to correct its course before blowing up in the event of an error. Just as important, the automated system means the company doesn't need to pay for full use of the Air Force radar installations on launch day -- or for the roughly 160 U.S. Air Force staff who would otherwise be on duty for its launches -- saving the company and its customers money. Most impressively, the automated system will make it possible for SpaceX to fly multiple boosters at once in a single launch.
By BeauHD from Slashdot's cease-and-desist department
French President Emmanuel Macron has a rather extreme approach to combating fake news: banning entire websites. In a speech to journalists on Wednesday, Macron said he planned to introduce new legislation to strictly regulate fake news during online political campaigns. Gizmodo reports: His proposal included a number of measures, most drastically "an emergency legal action" that could enable the government to either scrap "fake news" from a website or even block a website altogether. "If we want to protect liberal democracies, we must be strong and have clear rules," Macron said. "When fake news are spread, it will be possible to go to a judge... and if appropriate have content taken down, user accounts deleted and ultimately websites blocked."
Macron, himself a target of election interference, also outlined some less extreme measures in his speech yesterday. He proposed more rigid requirements around transparency, specifically in relation to online ads during elections. According to the Guardian, Macron said the legislation would force platforms to publicly identify who their advertisers are, as well as limit how much they can spend on ads over the course of an election campaign.
By BeauHD from Slashdot's matter-of-opinion department
An anonymous reader quotes a report from The Verge: According to a new Pew Research Center survey, defining online harassment is just as complicated for the average American user as it is for huge social media companies -- and the line gets even fuzzier when gender or race comes into the picture. The survey polled 4,151 respondents on various scenarios and asked them whether each one crossed the threshold for online harassment. In one hypothetical, a private disagreement between a man and his friend David is forwarded to a third party and posted online, which escalates to David receiving "unkind" messages, "vulgar" messages, and eventually being doxxed and threatened. When asked whether or not David was harassed, 89 percent of respondents agreed that he was. However, opinions on exactly when the harassment began varied widely: 5 percent considered it harassment when David offends his friend; 48 percent said it's when the friend forwards the conversation; 54 percent said it's when the conversation is shared publicly. Others agreed it crossed the line when David received the unkind messages (72 percent) and the vulgar messages (82 percent), was doxxed (85 percent), and was threatened (85 percent). There was little difference in responses by gender.
By BeauHD from Slashdot's on-the-clock department
In a blog post, personal analytics service RescueTime revealed exactly which days and times we were most productive in 2017 by studying anonymized data on how people spent their time on their computers and phones over the past 12 months. From the report: Simply put, our data shows that people were the most productive on November 14th. In fact, that entire week ranked as the most productive of the year. Which makes sense. With American Thanksgiving the next week and the mad holiday rush shortly after, mid-November is a great time for people to cram in a few extra work hours and get caught up before gorging on turkey dinner. On the other side of the spectrum, we didn't get off to a good start to the year. January 6th -- the first Friday of the year -- was the least productive day of 2017.
One of the biggest mistakes so many of us make when planning out our days is to assume we have 8+ hours to do productive work. This couldn't be further from the truth. What we found is that, on average, we only spend 5 hours a day working on a digital device. And with an average productivity pulse of 53% for the year, that means we only have 12.5 hours a week to do productive work. Our data showed that we do our most productive work between 10am and noon, and then again from 2 to 5pm each day. However, breaking it down to the hour, we do our most productive work on Wednesdays at 3pm. RescueTime has a separate blog post detailing how they calculate their productivity scores.
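The "productivity pulse" the post refers to is a weighted average of logged time across activity categories rated from very distracting to very productive. A minimal sketch of that calculation, with an assumed linear 0-100% weighting of the five categories (the sample log and the weights are illustrative, not RescueTime's actual data):

```python
# Hypothetical day of logged device time: (hours, category score), where
# scores run from -2 (very distracting) to +2 (very productive).
log = [(2.0, 2), (1.5, 1), (0.5, 0), (0.7, -1), (0.3, -2)]

# Assumed linear mapping of category scores to productivity weights.
WEIGHT = {-2: 0.0, -1: 0.25, 0: 0.5, 1: 0.75, 2: 1.0}

total_hours = sum(hours for hours, _ in log)
pulse = round(100 * sum(hours * WEIGHT[score] for hours, score in log) / total_hours)

print(total_hours, pulse)  # 5 logged hours give a pulse of 71 here
```

At the article's yearly average pulse of 53%, the same 5 logged hours a day would count as only about 2.65 productive hours.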
By BeauHD from Slashdot's it's-what's-on-the-inside-that-matters department
Popular repair site iFixit has acquired an iMac Pro and opened it up to see what's inside. They tore down the base iMac Pro with an 8-core processor, 32GB of RAM, and a 1TB SSD. Mac Rumors reports the findings: iFixit found that the RAM, CPU, and SSDs in the iMac Pro are modular and can potentially be replaced following purchase, but most of the key components "require a full disassembly to replace." Standard 27-inch iMacs have a small hatch in the back that allows easy access to the RAM for post-purchase upgrades, but that's missing in the iMac Pro. Apple has said that iMac Pro owners will need to get RAM replaced at an Apple Store or Apple Authorized Service Provider. iFixit says that compared to the 5K 27-inch iMac, replacing the RAM in the iMac Pro is indeed "a major undertaking."
Apple is using standard 288-pin DDR4 ECC RAM sticks with standard chips, which iFixit was able to upgrade using its own $2,000 RAM upgrade kit. A CPU upgrade is "theoretically possible," but because Apple uses a custom-made Intel chip, it's not clear if an upgrade is actually feasible. The same goes for the SSDs -- they're modular and removable, but custom made by Apple. Unlike the CPU, the GPU is BGA-soldered into place and cannot be removed. The internals of the iMac Pro are "totally different" from other iMacs, which is unsurprising as Apple said it introduced a new thermal design to accommodate the Xeon-W processors and Radeon Pro Vega GPUs built into the machines. The new thermal design includes an "enormous" dual-fan cooler, what iFixit says is a "ginormous heat sink," and a "big rear vent." Overall, iFixit gave the iMac Pro a repairability score of 3/10 since it's difficult to open and tough to get to internal components that might need to be repaired or replaced.
By BeauHD from Slashdot's good-news-for-chipmakers department
In a post on Google's Online Security Blog, two engineers described a novel software-level mitigation that has been deployed across the company's entire infrastructure, resulting in only minor declines in performance in most cases. "The company has also posted details of the new technique, called Retpoline, in the hopes that other companies will be able to follow the same technique," reports The Verge. "If the claims hold, it would mean Intel and others have avoided the catastrophic slowdowns that many had predicted." From the report: "There has been speculation that the deployment of KPTI causes significant performance slowdowns," the post reads, referring to the company's "Kernel Page Table Isolation" technique. "Performance can vary, as the impact of the KPTI mitigations depends on the rate of system calls made by an application. On most of our workloads, including our cloud infrastructure, we see negligible impact on performance." "Of course, Google recommends thorough testing in your environment before deployment," the post continues. "We cannot guarantee any particular performance or operational impact." Notably, the new technique only applies to one of the three variants involved in the new attacks. However, it's the variant that is arguably the most difficult to address. The other two vulnerabilities -- "bounds check bypass" and "rogue data cache load" -- would be addressed at the program and operating system level, respectively, and are unlikely to result in the same system-wide slowdowns.
By BeauHD from Slashdot's behind-the-scenes department
Reuters tells the story of how Daniel Gruss, a 31-year-old information security researcher and post-doctoral fellow at Austria's Graz Technical University, hacked his own computer and exposed a flaw in most of the Intel chips made in the past two decades. Prior to his discovery, Gruss and his colleagues Moritz Lipp and Michael Schwarz had thought such an attack on the processor's "kernel" memory, which is meant to be inaccessible to users, was only theoretically possible. From the report: "When I saw my private website addresses from Firefox being dumped by the tool I wrote, I was really shocked," Gruss told Reuters in an email interview, describing how he had unlocked personal data that should be secured. Gruss, Lipp and Schwarz, working from their homes on a weekend in early December, messaged each other furiously to verify the result. "We sat for hours in disbelief until we eliminated any possibility that this result was wrong," said Gruss, whose mind kept racing even after powering down his computer, so he barely caught a wink of sleep. Gruss and his colleagues had just confirmed the existence of what he regards as "one of the worst CPU bugs ever found." The flaw, now named Meltdown, was revealed on Wednesday and affects most processors manufactured by Intel since 1995. Separately, a second defect called Spectre has been found that also exposes core memory in most computers and mobile devices running on chips made by Intel, Advanced Micro Devices (AMD) and ARM Holdings, a unit of Japan's SoftBank.
By msmash from Slashdot's next-up department
Google is accepting "prophylactic" takedown requests to keep pirated content out of its search results, an anonymous reader writes, citing a TorrentFreak report. From the article: Over the past year, we've noticed on a few occasions that Google is processing takedown notices for non-indexed links. While we assumed that this was an 'error' on the sender's part, it appears to be a new policy. "Google has critically expanded notice and takedown in another important way: We accept notices for URLs that are not even in our index in the first place. That way, we can collect information even about pages and domains we have not yet crawled," Caleb Donaldson, copyright counsel at Google writes. In other words, Google blocks URLs before they appear in the search results, as some sort of piracy vaccine. "We process these URLs as we do the others. Once one of these not-in-index URLs is approved for takedown, we prophylactically block it from appearing in our Search results, and we take all the additional deterrent measures listed above." Some submitters are heavily relying on the new feature, Google found. In some cases, the majority of the submitted URLs in a notice are not indexed yet.
By msmash from Slashdot's breakthrough department
chalsall writes: Persistence pays off. Jonathan Pace, a GIMPS volunteer for over 14 years, discovered the 50th known Mersenne prime, 2^77,232,917 - 1, on December 26, 2017. The prime number is calculated by multiplying together 77,232,917 twos and then subtracting one. It weighs in at 23,249,425 digits, making it the largest prime number known to mankind. It bests the previous record prime, also discovered by GIMPS, by 910,807 digits. You can read a little more in the press release.
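The digit counts above can be checked without computing the 23-million-digit number itself, since 2^p - 1 has floor(p * log10(2)) + 1 decimal digits. A quick sketch (the previous record exponent, 74,207,281, is taken from GIMPS's earlier discovery, not from this summary):

```python
import math

p, prev_p = 77_232_917, 74_207_281  # new record exponent; previous record

# 2**p has floor(p * log10(2)) + 1 decimal digits, and subtracting 1 never
# drops a digit because a power of two is never a power of ten.
digits = math.floor(p * math.log10(2)) + 1
prev_digits = math.floor(prev_p * math.log10(2)) + 1

print(digits)                # 23249425
print(digits - prev_digits)  # 910807
```

Both values match the figures reported in the summary.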
By msmash from Slashdot's needs-attention department
Ocean dead zones with zero oxygen have quadrupled in size since 1950, scientists have warned, while the number of very low oxygen sites near coasts has multiplied tenfold. From a report: Most sea creatures cannot survive in these zones and current trends would lead to mass extinction in the long run, risking dire consequences for the hundreds of millions of people who depend on the sea. Climate change caused by fossil fuel burning is the cause of the large-scale deoxygenation, as warmer waters hold less oxygen. The coastal dead zones result from fertiliser and sewage running off the land and into the seas. The analysis, published in the journal Science, is the first comprehensive analysis of the areas and states: "Major extinction events in Earth's history have been associated with warm climates and oxygen-deficient oceans." Denise Breitburg, at the Smithsonian Environmental Research Center in the US and who led the analysis, said: "Under the current trajectory that is where we would be headed. But the consequences to humans of staying on that trajectory are so dire that it is hard to imagine we would go quite that far down that path." "This is a problem we can solve," Breitburg said. "Halting climate change requires a global effort, but even local actions can help with nutrient-driven oxygen decline." She pointed to recoveries in Chesapeake Bay in the US and the Thames river in the UK, where better farm and sewage practices led to dead zones disappearing.
By msmash from Slashdot's fixing-things department
Intel said on Thursday that by next week it expects to have patched 90 percent of the processors it has released within the last five years, making PCs and servers "immune" from both the Spectre and Meltdown exploits. The company adds: Intel has already issued updates for the majority of processor products introduced within the past five years. By the end of next week, Intel expects to have issued updates for more than 90 percent of processor products introduced within the past five years. In addition, many operating system vendors, public cloud service providers, device manufacturers and others have indicated that they have already updated their products and services. Intel continues to believe that the performance impact of these updates is highly workload-dependent and, for the average computer user, should not be significant and will be mitigated over time. While on some discrete workloads the performance impact from the software updates may initially be higher, additional post-deployment identification, testing and improvement of the software updates should mitigate that impact. System updates are made available by system manufacturers, operating system providers and others.
By msmash from Slashdot's minding-his-own-business department
An anonymous reader shares a report: Fun fact about Facebook's Mark Zuckerberg: Every year he gives himself a "personal challenge," which is not to be confused with the "New Year's resolutions" us plebs do every year. Over the years, he says, thanks to these challenges, he's taken up running as well as learned foreign languages and read books. But this year, as more revelations come to light about the rampant misuse of Facebook's platform -- including the spread of fake news and other forms of disinformation -- Zuckerberg's challenge is to focus on his business. "The world feels anxious and divided, and Facebook has a lot of work to do -- whether it's protecting our community from abuse and hate, defending against interference by nation states, or making sure that time spent on Facebook is time well spent," he writes. "My personal challenge for 2018 is to focus on fixing these important issues." In essence, Zuckerberg is vowing to help fix the problems that plague Facebook, which is his job, something he admits: "This may not seem like a personal challenge on its face," Zuckerberg writes, "but I think I'll learn more by focusing intensely on these issues than I would by doing something completely separate."
By msmash from Slashdot's closer-look department
Tom Warren, writing for The Verge: Chrome now has the type of dominance that Internet Explorer once did, and we're starting to see Google's own apps diverge from supporting web standards much in the same way Microsoft did a decade and a half ago. Whether you blame Google or the often slow-moving World Wide Web Consortium (W3C), the results have been particularly evident throughout 2017. Google has been at the center of a lot of "works best with Chrome" messages we're starting to see appear on the web. Google Meet, Allo, YouTube TV, Google Earth, and YouTube Studio Beta all block Windows 10's default browser, Microsoft Edge, from accessing them, and they all point users to download Chrome instead. Some also block Firefox with messages to download Chrome. Hangouts, Inbox, and AdWords 3 were all in the same boat when they first debuted. It's led one developer at Microsoft to describe Google's behavior as a strategic pattern. "When the largest web company in the world blocks out competitors, it smells less like an accident and more like strategy," said a Microsoft developer in a now-deleted tweet. Google also controls the most popular site in the world, and it regularly uses it to push Chrome. If you visit Google.com in a non-Chrome browser, you're prompted up to three times to download Chrome. Google has even extended that prompt to take over the entire page at times to really push Chrome in certain regions. Microsoft has been using similar tactics to convince Windows 10 users to stick with Edge. The troubling part for anyone who's invested in an open web is that Google is starting to ignore a principle it championed by making its own services Chrome-only -- even if only initially.