By Unknown Lamer from Slashdot's poking-the-hornet's-nest-for-12-years department
An anonymous reader writes with news that John Cartwright has been forced to shut down the Full-Disclosure list. The list was created in 2002 in response to the perception that Bugtraq was too heavily moderated, allowing security issues to remain unpublished and unpatched for too long. Quoting: "When Len and I created the Full-Disclosure list way back in July 2002, we knew that we'd have our fair share of legal troubles along the way. We were right. To date we've had all sorts of requests to delete things, requests not to delete things, and a variety of legal threats both valid or otherwise. However, I always assumed that the turning point would be a sweeping request for large-scale deletion of information that some vendor or other had taken exception to.
I never imagined that request might come from a researcher within the 'community' itself (and I use that word loosely in modern times). But today, having spent a fair amount of time dealing with complaints from a particular individual (who shall remain nameless) I realised that I'm done. The list has had its fair share of trolling, flooding, furry porn, fake exploits and DoS attacks over the years, but none of those things really affected the integrity of the list itself. However, taking a virtual hatchet to the list archives on the whim of an individual just doesn't feel right. That 'one of our own' would undermine the efforts of the last 12 years is really the straw that broke the camel's back.
(article continued at Slashdot)
By Soulskill from Slashdot's almost-as-good-as-the-NSA's-version department
tips news that 'DeepFace,' the software research project created by Facebook engineers to identify people in pictures, is now accurate 97.25% of the time. In other words, it's almost as good at recognizing faces as humans, who are able to determine whether two photos show the same person 97.53% of the time. The article says DeepFace reaches that level of accuracy "regardless of variations in lighting or whether the person in the picture is directly facing the camera." It continues, "DeepFace processes images of faces in two steps. First it corrects the angle of a face so that the person in the picture faces forward, using a 3-D model of an 'average' forward-looking face. Then the deep learning comes in as a simulated neural network works out a numerical description of the reoriented face. If DeepFace comes up with similar enough descriptions from two different images, it decides they must show the same face. ... The deep-learning part of DeepFace consists of nine layers of simple simulated neurons, with more than 120 million connections between them. To train that network, Facebook's researchers tapped a tiny slice of data from their company's hoard of user images—four million photos of faces belonging to almost 4,000 people."
By Soulskill from Slashdot's navigating-the-biotech-maize department
An anonymous reader writes "Though warned by scientists that overuse of a variety of corn engineered to be toxic to corn rootworms would eventually breed rootworms with resistance to its engineered toxicity, the agricultural industry went ahead and overused the corn anyway with little EPA intervention. The corn was first planted in 1996. The first reports of rootworm resistance were officially documented in 2011, though agricultural scientists weren't allowed by seed companies to study the engineered corn until 2010. Now, a recent study has clearly shown how the rootworms have successfully adapted to the engineered corn. Given current trends, continued overuse of the corn is predicted, and as resistance eventually spreads to the whole rootworm population, farmers will be forced to start using pesticides once more, negating the economic benefits of the engineered corn. 'Rootworm resistance was expected from the outset, but the Bt seed industry, seeking to maximize short-term profits, ignored outside scientists.'"