AI

Amazon Pledges Up To $50 Billion To Expand AI, Supercomputing For US Government 15

Amazon is committing up to $50 billion to massively expand AI and supercomputing capacity for U.S. government cloud regions, adding 1.3 gigawatts of high-performance compute and giving federal agencies access to its full suite of AI tools. Reuters reports: The project, expected to break ground in 2026, will add nearly 1.3 gigawatts of artificial intelligence and high-performance computing capacity across AWS Top Secret, AWS Secret and AWS GovCloud regions by building data centers equipped with advanced compute and networking technologies.

Under the latest initiative, federal agencies will gain access to AWS' comprehensive suite of AI services, including Amazon SageMaker for model training and customization, Amazon Bedrock for deploying models and agents, as well as foundation models such as Amazon Nova and Anthropic Claude. The federal government seeks to develop tailored AI solutions and drive cost-savings by leveraging AWS' dedicated and expanded capacity.
The Almighty Buck

Neon Pays Users To Record Their Phone Calls, Sell Data To AI Firms 34

Neon Mobile, now the No. 2 social networking app in Apple's U.S. App Store, pays users up to $30 per day to record their phone calls and sell the data to AI companies. The app claims to only capture one side of a call unless both parties use Neon, but its terms grant sweeping rights over recordings. TechCrunch reports: The app, Neon Mobile, pitches itself as a money-making tool offering "hundreds or even thousands of dollars per year" for access to your audio conversations. Neon's website says the company pays 30 cents per minute when you call other Neon users and up to $30 per day maximum for making calls to anyone else. The app also pays for referrals.
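Taking Neon's advertised rates at face value, a quick sanity check (assuming the 30-cents-per-minute rate also governs how fast the $30 daily cap is reached, which the company doesn't state) shows the cap corresponds to 100 minutes of calls per day, and maxing it out all year would yield just under $11,000 -- consistent with the "thousands of dollars per year" pitch:

```python
# Back-of-envelope on Neon's advertised payouts, in integer cents
# to avoid floating-point rounding.
rate_cents_per_min = 30          # "30 cents per minute" for Neon-to-Neon calls
daily_cap_cents = 30 * 100       # "up to $30 per day maximum"

minutes_to_cap = daily_cap_cents // rate_cents_per_min
annual_at_cap_dollars = (daily_cap_cents * 365) // 100

print(minutes_to_cap)         # 100
print(annual_at_cap_dollars)  # 10950
```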

According to Neon's terms of service, the company's mobile app can capture users' inbound and outbound phone calls. However, Neon's marketing claims to only record your side of the call unless it's with another Neon user. That data is being sold to "AI companies," the company's terms of service state, "for the purpose of developing, training, testing, and improving machine learning models, artificial intelligence tools and systems, and related technologies."

Despite what Neon's privacy policy says, its terms include a very broad license to its user data, where Neon grants itself a: "...worldwide, exclusive, irrevocable, transferable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to sell, use, host, store, transfer, publicly display, publicly perform (including by means of a digital audio transmission), communicate to the public, reproduce, modify for the purpose of formatting for display, create derivative works as authorized in these Terms, and distribute your Recordings, in whole or in part, in any media formats and through any media channels, in each instance whether now known or hereafter developed." That leaves plenty of wiggle room for Neon to do more with users' data than it claims. The terms also include an extensive section on beta features, which have no warranty and may have all sorts of issues and bugs.
Peter Jackson, cybersecurity and privacy attorney at Greenberg Glusker, told TechCrunch: "Once your voice is over there, it can be used for fraud. Now, this company has your phone number and essentially enough information -- they have recordings of your voice, which could be used to create an impersonation of you and do all sorts of fraud."
AI

LinkedIn Set To Start To Train Its AI on Member Profiles (techradar.com) 27

LinkedIn has said it will start using some member profiles, posts, resumes and public activity to train its AI models from November 3, 2025. From a report: Users are rightly frustrated with the change; the biggest concern isn't that the business networking platform will do so, but that the setting is enabled by default, leaving users to actively opt out. Users can opt out via the 'data for generative AI improvement' setting; however, opting out only applies to data collected afterward, with data gathered up to that point still retained within the training environment.
Education

College Grads Are Pursuing a New Career Path: Training AI Models (bloomberg.com) 34

College graduates across specialized fields are pursuing a new career path training AI models, with companies paying between $30 and $160 per hour for their expertise. Handshake, a university career networking platform, recruited more than 1,000 AI trainers in six months through its newly created Handshake AI division for what it describes as the top five AI laboratories.

The trend stems from federal funding cuts straining academic research and a stalled entry-level job market, making AI training an attractive alternative for recent graduates with specialized knowledge in fields including music, finance, law, education, statistics, virology, and quantum mechanics.
Network

Cisco Updates Networking Products in Bid To Tap AI-Fueled Demand (bloomberg.com) 8

Cisco is updating its networking and security products to make AI networks speedier and more secure, part of a broader push to capitalize on the AI spending boom. From a report: A new generation of switches -- networking equipment that links computer systems -- will offer a 10-fold improvement in performance, the company said on Tuesday. That will help prevent AI applications from suffering bottlenecks when transferring data, Cisco said. Networking speed has become a bigger issue as data center operators try to manage a flood of AI information -- both in the cloud and within the companies' own facilities. Slowdowns can hinder AI models, Cisco President and Chief Product Officer Jeetu Patel said in an interview. That applies to the development phase -- known as training -- and the operation of the models, a stage called inference. A massive build-out of data centers has made Cisco more relevant, he said. "AI is going to be network-bound, both on training and inference," Patel said. Having computer processors sit idle during training because of slow networks is "just throwing away money."
Supercomputing

The IRS Is Buying an AI Supercomputer From Nvidia (theintercept.com) 150

According to The Intercept, the IRS is set to purchase an Nvidia SuperPod AI supercomputer to enhance its machine learning capabilities for tasks like fraud detection and taxpayer behavior analysis. From the report: With Elon Musk's so-called Department of Government Efficiency installing itself at the IRS amid a broader push to replace federal bureaucracy with machine-learning software, the tax agency's computing center in Martinsburg, West Virginia, will soon be home to a state-of-the-art Nvidia SuperPod AI computing cluster. According to the previously unreported February 5 acquisition document, the setup will combine 31 separate Nvidia servers, each containing eight of the company's flagship Blackwell processors designed to train and operate artificial intelligence models that power tools like ChatGPT. The hardware has not yet been purchased and installed, nor is a price listed, but SuperPod systems reportedly start at $7 million. The setup described in the contract materials notes that it will include a substantial memory upgrade from Nvidia.
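The contract's headline numbers multiply out as follows (the per-GPU figure is only a rough estimate, since the $7 million is a reported starting price for SuperPod systems, not the IRS's actual cost):

```python
# Rough arithmetic on the IRS SuperPod acquisition figures.
servers = 31                     # separate Nvidia servers in the acquisition document
gpus_per_server = 8              # flagship Blackwell processors per server
superpod_price_usd = 7_000_000   # reported *starting* price, not the actual contract value

total_gpus = servers * gpus_per_server
price_per_gpu = round(superpod_price_usd / total_gpus)

print(total_gpus)     # 248
print(price_per_gpu)  # 28226
```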

Though small compared to the massive AI-training data centers deployed by companies like OpenAI and Meta, the SuperPod is still a powerful and expensive setup using the most advanced technology offered by Nvidia, whose chips have facilitated the global machine-learning spree. While the hardware can be used in many ways, it's marketed as a turnkey means of creating and querying an AI model. Last year, the MITRE Corporation, a federally funded military R&D lab, acquired a $20 million SuperPod setup to train bespoke AI models for use by government agencies, touting the purchase as a "massive increase in computing power" for the United States.

How exactly the IRS will use its SuperPod is unclear. An agency spokesperson said the IRS had no information to share on the supercomputer purchase, including which presidential administration ordered it. A 2024 report by the Treasury Inspector General for Tax Administration identified 68 different AI-related projects underway at the IRS; the Nvidia cluster is not named among them, though many were redacted. But some clues can be gleaned from the purchase materials. "The IRS requires a robust and scalable infrastructure that can handle complex machine learning (ML) workloads," the document explains. "The Nvidia Super Pod is a critical component of this infrastructure, providing the necessary compute power, storage, and networking capabilities to support the development and deployment of large-scale ML models."

The document notes that the SuperPod will be run by the IRS Research, Applied Analytics, and Statistics division, or RAAS, which leads a variety of data-centric initiatives at the agency. While no specific uses are cited, it states that this division's Compliance Data Warehouse project, which is behind this SuperPod purchase, has previously used machine learning for automated fraud detection, identity theft prevention, and generally gaining a "deeper understanding of the mechanisms that drive taxpayer behavior."

AI

Nvidia Dismisses China AI Threat, Says DeepSeek Still Needs Its Chips 77

Nvidia has responded to the market panic over Chinese AI group DeepSeek, arguing that the startup's breakthrough still requires "significant numbers of NVIDIA GPUs" for its operation. The US chipmaker, which saw more than $600 billion wiped from its market value on Monday, characterized DeepSeek's advancement as "excellent" but asserted that the technology remains dependent on its hardware.

"DeepSeek's work illustrates how new models can be created using [test time scaling], leveraging widely-available models and compute that is fully export control compliant," Nvidia said in a statement Monday. However, it stressed that "inference requires significant numbers of NVIDIA GPUs and high-performance networking." The statement came after DeepSeek's release of an AI model that reportedly achieves performance comparable to those from US tech giants while using fewer chips, sparking the biggest one-day drop in Nvidia's history and sending shockwaves through global tech stocks.

Nvidia sought to frame DeepSeek's breakthrough within existing technical frameworks, citing it as "a perfect example of Test Time Scaling" and noting that traditional scaling approaches in AI development - pre-training and post-training - "continue" alongside this new method. The company's attempt to calm market fears follows warnings from analysts about potential threats to US dominance in AI technology. Goldman Sachs earlier warned of possible "spillover effects" from any setbacks in the tech sector to the broader market. The shares stabilized somewhat in afternoon trading but remained on track for their worst session since March 2020, when pandemic fears roiled markets.
Technology

Nvidia Takes an Added Role Amid AI Craze: Data-Center Designer (msn.com) 24

Nvidia dominates the chips at the center of the AI boom. It wants to conquer almost everything else that makes those chips tick, too. From a report: Chief Executive Jensen Huang is increasingly broadening his company's focus -- and seeking to widen its advantage over competitors -- by offering software, data-center design services and networking technology in addition to its powerful silicon brains. More than a supplier of a valuable hardware component, he is trying to build Nvidia into a one-stop shop for all the key elements in the data centers where tools like OpenAI's ChatGPT are created and deployed -- or what he calls "AI factories."

Huang emphasized Nvidia's growing prowess at data-center design following an earnings report Wednesday that exceeded Wall Street forecasts. The report came days after rival AMD agreed to pay nearly $5 billion to buy data-center design and manufacturing company ZT Systems to try to gain ground on Nvidia. "We have the ability fairly uniquely to integrate to design an AI factory because we have all the parts," Huang said in a call with analysts. "It's not possible to come up with a new AI factory every year unless you have all the parts." It is a strategy designed to extend the business success that has made Nvidia one of the world's most valuable companies -- and to insulate it from rivals eager to eat into its AI-chip market share, estimated at more than 80%. Gobbling up more of the value in AI data centers both adds revenue and makes its offerings stickier for customers.

[...] Nvidia is building on the effectiveness of its 17-year-old proprietary software, called CUDA, which enables programmers to use its chips. More recently, Huang has been pushing resources into a superfast networking protocol called InfiniBand, after acquiring the technology's main equipment maker, Mellanox Technologies, five years ago for nearly $7 billion. Analysts estimate that InfiniBand is used in most AI-training deployments. Nvidia is also building a business that supplies AI-optimized Ethernet, a form of networking widely used in traditional data centers. The Ethernet business is expected to generate billions of dollars in revenue within a year, Chief Financial Officer Colette Kress said Wednesday. More broadly, Nvidia sells products including central processors and networking chips for a range of other data-center equipment that is fine-tuned to work seamlessly together.

AI

Artists Are Deleting Instagram For New App Cara In Protest of Meta AI Scraping (fastcompany.com) 21

Some artists are jumping ship for the anti-AI portfolio app Cara after Meta began using Instagram content to train its AI models. Fast Company explains: The portfolio app bills itself as a platform that protects artists' images from being used to train AI, and only allowing AI content to be posted if it's clearly labeled. Based on the number of new users the Cara app has garnered over the past few days, there seems to be a need. Between May 31 and June 2, Cara's user base tripled from less than 100,000 to more than 300,000 profiles, skyrocketing to the top of the app store. [...] Cara is a social networking app for creatives, in which users can post images of their artwork, memes, or just their own text-based musings. It shares similarities with major social platforms like X (formerly Twitter) and Instagram on a few fronts. Users can access Cara through a mobile app or on a browser. Both options are free to use. The UI itself is like an arts-centric combination of X and Instagram. In fact, some UI elements seem like they were pulled directly from other social media sites. (It's not the most innovative approach, but it is strategic: as a new app, any barriers to potential adoption need to be low).

Cara doesn't train any AI models on its content, nor does it allow third parties to do so. According to Cara's FAQ page, the app aims to protect its users from AI scraping by automatically implementing "NoAI" tags on all of its posts. The website says these tags "are intended to tell AI scrapers not to scrape from Cara." Ultimately, they appear to be HTML metadata tags that politely ask bad actors not to get up to any funny business, and it's pretty unlikely that they hold any actual legal weight. Cara admits as much, too, warning its users that the tags aren't a "fully comprehensive solution and won't completely prevent dedicated scrapers." With that in mind, Cara assesses the "NoAI" tagging system as "a necessary first step in building a space that is actually welcoming to artists -- one that respects them as creators and doesn't opt their work into unethical AI scraping without their consent."
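Cara doesn't document the exact markup, but "NoAI" signals are commonly expressed as robots-style meta directives ("noai", "noimageai"); a scraper that chooses to honor them might check for them roughly like this (a sketch, with the tag format assumed rather than taken from Cara):

```python
from html.parser import HTMLParser

class NoAIMetaFinder(HTMLParser):
    """Collects robots-style meta directives such as 'noai' / 'noimageai'."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        if (d.get("name") or "").lower() == "robots":
            for token in (d.get("content") or "").lower().split(","):
                self.directives.add(token.strip())

def has_noai_tag(html: str) -> bool:
    finder = NoAIMetaFinder()
    finder.feed(html)
    return "noai" in finder.directives

page = '<html><head><meta name="robots" content="noai, noimageai"></head></html>'
print(has_noai_tag(page))  # True
```

As the article notes, nothing forces a scraper to run a check like this, which is exactly why the tags carry no real enforcement power.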

In December, Cara launched another tool called Cara Glaze to defend its artists' work against scrapers. (Users can only use it a select number of times.) Glaze, developed by the SAND Lab at University of Chicago, makes it much more difficult for AI models to accurately understand and mimic an artist's personal style. The tool works by learning how AI bots perceive artwork, and then making a set of minimal changes that are invisible to the human eye but confusing to the AI model. The AI bot then has trouble "translating" the art style and generates warped recreations. In the future, Cara also plans to implement Nightshade, another University of Chicago software that helps protect artwork against AI scrapers. Nightshade "poisons" AI training data by adding invisible pixels to artwork that can cause AI software to completely misunderstand the image. Beyond establishing shields against data mining, Cara also uses a third party service to detect and moderate any AI artwork that's posted to the site. Non-human artwork is forbidden, unless it's been properly labeled by the poster.
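Glaze's actual perturbation direction comes from optimizing against an image feature extractor, which is well beyond a snippet. The toy numpy sketch below illustrates only the budget constraint the article describes -- "invisible to the human eye" in practice means every pixel changes by at most a small, bounded amount (the random direction here stands in for Glaze's optimized one):

```python
import numpy as np

def perturb(image: np.ndarray, direction: np.ndarray, eps: float = 2.0) -> np.ndarray:
    """Apply a perturbation clipped to +/- eps intensity levels per pixel."""
    delta = np.clip(direction, -eps, eps)
    return np.clip(image + delta, 0, 255)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)

# In Glaze, 'direction' is found by optimizing against a feature extractor;
# random noise is used here purely to demonstrate the per-pixel budget.
cloaked = perturb(img, rng.normal(0, 5, size=img.shape), eps=2.0)
print(np.abs(cloaked - img).max() <= 2.0)  # True
```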

AI

Schneider Electric Warns That Existing Datacenters Aren't Buff Enough For AI (theregister.com) 55

The infrastructure behind popular AI workloads is so demanding that Schneider Electric has suggested it may be time to reevaluate the way we build datacenters. The Register reports: In a recent white paper [PDF], the French multinational broke down several of the factors that make accommodating AI workloads so challenging and offered its guidance for how future datacenters could be optimized for them. The bad news is some of the recommendations may not make sense for existing facilities. The problem boils down to the fact that AI workloads often require low-latency, high-bandwidth networking to operate efficiently, which forces densification of racks, and ultimately puts pressure on existing datacenters' power delivery and thermal management systems.

Today it's not uncommon for GPUs to consume upwards of 700W and servers to exceed 10kW. Hundreds of these systems may be required to train a large language model in a reasonable timescale. According to Schneider, this is already at odds with what most datacenters can manage at 10-20kW per rack. This problem is exacerbated by the fact that training workloads benefit heavily from maximizing the number of systems per rack as it reduces network latency and costs associated with optics. In other words, spreading the systems out can reduce the load on each rack, but if doing so requires using slower optics, bottlenecks can be introduced that negatively affect cluster performance.
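The densification tension is easy to see in numbers. Using the figures cited above (700W GPUs, eight to a server, 10-20kW rack budgets) plus an assumed 4kW of per-server overhead for CPUs, memory and fans -- a figure not from the white paper -- a mid-range rack budget fits only a single training server:

```python
# Back-of-envelope rack density using the figures cited in the article.
gpu_power_w = 700            # per-GPU draw cited above
gpus_per_server = 8          # typical GPU count in a training server
server_overhead_w = 4_000    # CPUs, RAM, fans, etc. (assumed, not from the paper)
rack_budget_w = 15_000       # midpoint of the 10-20 kW most datacenters manage

server_power_w = gpu_power_w * gpus_per_server + server_overhead_w
servers_per_rack = rack_budget_w // server_power_w

print(server_power_w)    # 9600
print(servers_per_rack)  # 1
```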

The situation isn't nearly as dire for inferencing -- the act of putting trained models to work generating text, images, or analyzing mountains of unstructured data -- as fewer AI accelerators per task are required compared to training. The question, then, is how to safely and reliably deliver adequate power to these dense 20-plus kilowatt racks, and how to efficiently reject the heat generated in the process. "These challenges are not insurmountable but operators should proceed with a full understanding of the requirements, not only with respect to IT, but to physical infrastructure, especially existing datacenter facilities," the report's authors write. The whitepaper highlights several changes to datacenter power, cooling, rack configuration, and software management that operators can implement to mitigate the demands of widespread AI adoption.

Social Networks

Questions Raised about Quality of Reddit's New Moderators After Protest-Related Purges (arstechnica.com) 131

Reddit's forum about home food canning used to have two moderators with science-related master's degrees. And Reddit's home automation forum used to be moderated by a former IT worker with decades of networking experience — and some training from a professional electrician.

After the great Reddit protests, all three were removed from their positions. But now Ars Technica asks whether Reddit's replacement moderators will be as capable of spotting dangerous advice. In response to concerns that the new r/homeautomation mod team could overlook posts with dangerous misinformation, one moderator requesting anonymity pointed me to the subreddit's sidebar, which has a disclaimer about the dangers of electricity. However, the disclaimer is only visible on old Reddit. The mod doesn't know why...

One of the top complaints I've heard about the Great Reddit Mod Purge is the company's alleged disregard for replaced mods' expertise. The swift, contentious nature of the mod replacements meant that old mods often didn't share advice with new mods. Meanwhile, the users Reddit chose to replace protesting mods may not have been properly vetted. That includes one of the new mods of the 3D-printing-focused subreddit r/ender3, who requested to only be referred to as the subreddit's top moderator. This person replied to a post by the Reddit employee going by u/ModCodeofConduct and requested to mod the subreddit as a "joke," they said. The user got the job despite telling me, "I have never touched a 3D printer in my life, and there is zero activity on my Reddit account related to 3D printing...." [T]hat mod will step down eventually, "as the joke is starting to wear off." But the story suggests that new mods weren't selected with the utmost care...

None of the forcibly removed mods I spoke with have worked with or plan to work with replacement mods to pass on knowledge gained through years of experience... In addition to lost knowledge, new and old mods are also dealing with the loss of third-party apps considered helpful for moderating.

Businesses

Gen-Z Is Taking Courses On How To Send An Email and What To Wear In the Office, According to a WSJ Report (businessinsider.com) 203

Recent graduates from Generation Z, who have primarily experienced virtual classes and remote internships during college, may need to improve their soft skills such as email writing, casual conversation, and appropriate work attire. According to a new report from the Wall Street Journal, companies like KPMG, Deloitte, and PwC are offering training programs to help these employees adapt to the office, focusing on in-person communication, eye contact, conversation pauses, and professional dress. Insider reports: KPMG is offering new hires introductory training that includes how to talk to people in person, with tips on the appropriate level of eye contact and pauses in a conversation, the company's vice chair of talent and culture, Sandy Torchia, told the Journal. Deloitte and PwC also began offering similar trainings earlier this year, the Financial Times reported in May. Similarly, the consulting company Protiviti said it expanded its training for new hires during the pandemic to include a series of virtual meetings that focus on issues like how to make authentic conversation, according to the Journal. Scott Redfearn, Protiviti's executive vice president of global human resources, told the Journal the company has had to remind new hires to avoid casual attire like blue jeans with holes in them.

Some universities have also stepped in to bridge the gap. Michigan State University's director of career management, Marla McGraw, told the Journal that companies need to be more direct when it comes to telling new hires what to wear and how to act in the office. The school now requires many of its business majors to take classes that foster soft skills like how to network in person. The Journal reported that one course breaks down a networking conversation by reminding students to pause after they introduce themselves in order to let the other person say their name, as well as respond to signs the other person might be looking to end the conversation. While it's common for companies to host onboarding sessions that cover office dynamics like attire and rules for interpersonal relationships, some experts say younger employees need these reminders now more than ever.

AI

Meta's Building an In-House AI Chip to Compete with Other Tech Giants (techcrunch.com) 17

An anonymous reader shared this report from the Verge: Meta is building its first custom chip specifically for running AI models, the company announced on Thursday. As Meta increases its AI efforts — CEO Mark Zuckerberg recently said the company sees "an opportunity to introduce AI agents to billions of people in ways that will be useful and meaningful" — the chip and other infrastructure plans revealed Thursday could be critical tools for Meta to compete with other tech giants also investing significant resources into AI.

Meta's new MTIA chip, which stands for Meta Training and Inference Accelerator, is its "in-house, custom accelerator chip family targeting inference workloads," Meta VP and head of infrastructure Santosh Janardhan wrote in a blog post... But the MTIA chip is seemingly a long ways away: it's not set to come out until 2025, TechCrunch reports.

Meta has been working on "a massive project to upgrade its AI infrastructure in the past year," Reuters reports, "after executives realized it lacked the hardware and software to support demand from product teams building AI-powered features."

As a result, the company scrapped plans for a large-scale rollout of an in-house inference chip and started work on a more ambitious chip capable of performing training and inference, Reuters reported...

Meta said it has an AI-powered system to help its engineers create computer code, similar to tools offered by Microsoft, Amazon and Alphabet.

TechCrunch calls these announcements "an attempt at a projection of strength from Meta, which historically has been slow to adopt AI-friendly hardware systems — hobbling its ability to keep pace with rivals such as Google and Microsoft."

Meta's VP of Infrastructure told TechCrunch "This level of vertical integration is needed to push the boundaries of AI research at scale." Over the past decade or so, Meta has spent billions of dollars recruiting top data scientists and building new kinds of AI, including AI that now powers the discovery engines, moderation filters and ad recommenders found throughout its apps and services. But the company has struggled to turn many of its more ambitious AI research innovations into products, particularly on the generative AI front. Until 2022, Meta largely ran its AI workloads using a combination of CPUs — which tend to be less efficient for those sorts of tasks than GPUs — and a custom chip designed for accelerating AI algorithms...

The MTIA is an ASIC, an application-specific integrated circuit -- a chip tailored to a particular workload, which can be programmed to carry out one or many tasks in parallel... Custom AI chips are increasingly the name of the game among the Big Tech players. Google created a processor, the TPU (short for "tensor processing unit"), to train large generative AI systems like PaLM-2 and Imagen. Amazon offers proprietary chips to AWS customers both for training (Trainium) and inferencing (Inferentia). And Microsoft, reportedly, is working with AMD to develop an in-house AI chip called Athena.

Meta says that it created the first generation of the MTIA — MTIA v1 — in 2020, built on a 7-nanometer process. It can scale beyond its internal 128 MB of memory to up to 128 GB, and in a Meta-designed benchmark test — which, of course, has to be taken with a grain of salt — Meta claims that the MTIA handled "low-complexity" and "medium-complexity" AI models more efficiently than a GPU. Work remains to be done in the memory and networking areas of the chip, Meta says, which present bottlenecks as the size of AI models grows, requiring workloads to be split up across several chips. (Not coincidentally, Meta recently acquired an Oslo-based team building AI networking tech at British chip unicorn Graphcore.) And for now, the MTIA's focus is strictly on inference — not training — for "recommendation workloads" across Meta's app family...

If there's a common thread in today's hardware announcements, it's that Meta's attempting desperately to pick up the pace where it concerns AI, specifically generative AI... In part, Meta's feeling increasing pressure from investors concerned that the company's not moving fast enough to capture the (potentially large) market for generative AI. It has no answer — yet — to chatbots like Bard, Bing Chat or ChatGPT. Nor has it made much progress on image generation, another key segment that's seen explosive growth.

If the predictions are right, the total addressable market for generative AI software could be $150 billion. Goldman Sachs predicts that it'll raise GDP by 7%. Even a small slice of that could erase the billions Meta's lost in investments in "metaverse" technologies like augmented reality headsets, meetings software and VR playgrounds like Horizon Worlds.

Technology

Courses in the Metaverse Struggle To Compete With Real World (ft.com) 18

Fulfilment of initial promise made for the technology remains elusive. From a report: The Vienna University of Economics and Business (WU) has offered a tantalising prospect to people who want to learn but don't like to leave the house: join us virtually, for a postgraduate course in the metaverse. Students signing up to WU's professional master of sustainability, entrepreneurship and technology programme can complete the entire part-time course -- attending lectures, meeting their classmates for a coffee and so on -- by just logging in via a laptop. The course -- developed in partnership with Tomorrow University of Applied Sciences, an edtech start-up based in Berlin -- is one of many examples where business schools have embraced the metaverse, 3D technology, virtual reality headsets and avatars to extend the reach of management and leadership training.

Setting up the course "provides us with greater reach, making the course more global," explains Barbara Stottinger, dean of WU's executive academy. However, she is quick to add: "Vienna is a great location so coming to campus is still pretty attractive to most of our students." And this is the problem at the heart of why many business schools have been reluctant to enter the metaverse for course tuition: studying in the real world has its advantages. Teaching the interpersonal skills of leadership and networking that are so integral to postgraduate management courses, like the MBA, is better done in person. It also avoids having to fund purchases of the hardware and software necessary for metaverse projects. Meanwhile, the metaverse has been caught in an extreme example of a 'hype cycle.' This is where wild enthusiasm about a new technology turns to widespread rejection, as its reality fails to live up to what is claimed for it.

Education

SANS Institute Founder Hopes to Find New Cybersecurity Talent With a Game (esecurityplanet.com) 15

storagedude writes: Alan Paller, founder of the cybersecurity training organization SANS Technology Institute, has launched an initiative aimed at finding and developing cybersecurity talent at the community college and high school level — through a game developed by its CTO James Lyne. A similar game was already the basis of a UK government program that has reached 250,000 students, and Paller hopes the U.S. will adopt a similar model to help ease the chronic shortage of cybersecurity talent. And Paller's own Cyber Talent Institute (or CTI) has already reached 29,000 students, largely through state-level partnerships.

But playing the game isn't the same as becoming a career-ready cybersecurity pro. By tapping high schools and community colleges, the group hopes to "discover and train a diverse new generation of 25,000 cyber stars by the year 2025," Paller told eSecurity Planet. "SANS is an organization that finds people who are already in the field and makes them better. What CTI is doing is going down a step in the pipeline, to the students, to find the talent earlier, so that we don't lose them. Because the way the education system works, only a few people seem to go into cybersecurity. We wanted to change that.

"You did an article earlier this month about looking in different places for talent, looking for people who are already working. That's the purpose of CTI. To reach out to students. It's to go beyond the pipeline that we automatically come into cybersecurity through math, computer science, and networking and open the funnel much wider. Find people who have not already found technology, but who have three characteristics that seem to make superstars — tenacity, curiosity, and love of learning new things. They don't mind being faced with new problems. They like them. And what the game does is find those people. So CTI is just moving to earlier in the pipeline."

Networking

San Diego's Connected Streetlights Taught to Recognize Bicycles (ieee.org) 24

Last year the city of San Diego installed 3,200 smart streetlights, each one monitoring 36 x 54 meters of pavement. They originally used the data to time traffic signals -- but now Slashdot reader Tekla Perry summarizes a report from IEEE Spectrum: Developers for the City of San Diego spent months training its smart streetlights to recognize and count bicycles from just about any angle. The system is now monitoring bicycle traffic, but a few issues remain -- like figuring out how to distinguish between bicycles being ridden and those merely being carried, say on a bike rack or thrown in a pickup truck.

The software has a similar problem with pedestrian counting: when a convertible comes into view, it is counted as both a car and a pedestrian, since the driver is visible.
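The ridden-vs-carried distinction the developers are wrestling with is essentially a matter of associating detections with each other. As a purely illustrative sketch (this is not San Diego's actual pipeline; the box format and overlap threshold are assumptions), a counter might only tally a bicycle when its bounding box sufficiently overlaps a detected person:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def count_ridden_bicycles(bike_boxes, person_boxes, threshold=0.1):
    """Count only bicycles whose box overlaps a detected person,
    skipping bikes on racks or in truck beds with no rider on top."""
    return sum(
        1 for bike in bike_boxes
        if any(iou(bike, person) >= threshold for person in person_boxes)
    )
```

A bike leaning on a rack would have no overlapping person detection and so would be skipped; in practice a real system would also have to handle occlusion and nearby-but-unrelated pedestrians, which is presumably why the training took months.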

Security

1 in 3 Michigan Workers Tested Opened A Password-Phishing Email (go.com) 119

An anonymous reader quotes the AP: Michigan auditors who conducted a fake "phishing" attack on 5,000 randomly selected state employees said Friday that nearly one-third opened the email, a quarter clicked on the link and almost one-fifth entered their user ID and password. The covert operation was done as part of an audit that uncovered weaknesses in the state government's computer network, including that not all workers are required to participate in cybersecurity awareness training... Auditors made 14 findings, including five that are "material" -- the most serious. They range from inadequate management of firewalls to insufficient processes to confirm if only authorized devices are connected to the network. "Unauthorized devices may not meet the state's requirements, increasing the risk of compromise or infection of the network," the audit said.
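For a rough sense of scale, the reported fractions can be applied to the 5,000 tested employees. The audit gave only approximate fractions ("nearly one-third," "a quarter," "almost one-fifth"), so these counts are illustrative, not figures from the report:

```python
employees = 5000

# Approximate counts implied by the audit's reported fractions.
opened = round(employees / 3)   # "nearly one-third opened the email"
clicked = round(employees / 4)  # "a quarter clicked on the link"
entered = round(employees / 5)  # "almost one-fifth entered their user ID and password"

print(opened, clicked, entered)  # roughly 1667, 1250, and 1000 employees
```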

Programming

What Mistakes Can Stall An IT Career? (cio.com) 207

Quoting snydeq: "In the fast-paced world of technology, complacency can be a career killer," Paul Heltzel writes in an article on 20 ways to kill your IT career without knowing it. "So too can any number of hidden hazards that quietly put your career on shaky ground -- from not knowing your true worth to thinking you've finally made it. Learning new tech skills and networking are obvious ways to solidify your career. But what about accidental ways that could put your career in a slide? Hidden hazards -- silent career killers? Some tech pitfalls may not be obvious."
CIO's reporter "talked to a number of IT pros, recruiters, and developers about how to build a bulletproof career and avoid lesser-known pitfalls," citing hazards like burning bridges and skipping social events. But it also warns of the dangers of staying in your comfort zone too long instead of asking for "stretch" assignments and accepting training opportunities.

The original submission puts the same question to Slashdot readers. "What silent career killers have you witnessed (or fallen prey to) in your years in IT?"

Facebook

How Facebook Flouts Holocaust Denial Laws Except Where It Fears Being Sued (theguardian.com) 310

An anonymous reader quotes a report from The Guardian: Facebook's policies on Holocaust denial will come under fresh scrutiny following the leak of documents that show moderators are being told not to remove this content in most of the countries where it is illegal. The files explain that moderators should take down Holocaust denial material in only four of the 14 countries where it is outlawed. One document says the company "does not welcome local law that stands as an obstacle to an open and connected world" and will only consider blocking or hiding Holocaust denial messages and photographs if "we face the risk of getting blocked in a country or a legal risk." A picture of a concentration camp with the caption "Never again Believe the Lies" was permissible if posted anywhere other than the four countries in which Facebook fears legal action, one document explains. Facebook contested the figures but declined to elaborate.

Documents show Facebook has told moderators to remove dehumanizing speech or any "calls for violence" against refugees. Content "that says migrants should face a firing squad or compares them to animals, criminals or filth" also violates its guidelines. But it adds: "As a quasi-protected category, they will not have the full protections of our hate speech policy because we want to allow people to have broad discussions on migrants and immigration which is a hot topic in upcoming elections."

The definitions are set out in training manuals provided by Facebook to the teams of moderators who review material that has been flagged by users of the social media service. The documents explain the rules and guidelines the company applies to hate speech and "locally illegal content," with particular reference to Holocaust denial. One 16-page training manual explains Facebook will only hide or remove Holocaust denial content in four countries -- France, Germany, Israel and Austria.
The document says this is not on grounds of taste, but because the company fears it might get sued.

Crime

Investigation Finds Inmates Built Computers, Hid Them In Prison Ceiling (cbs6albany.com) 258

An anonymous reader quotes a report from WRGB: The discovery of two working computers hidden in a ceiling at the Marion Correctional Institution prompted an investigation by the state into how inmates got access. In late July 2015, staff at the prison discovered the computers hidden on a plywood board in the ceiling above a training room closet. The computers were also connected to the Ohio Department of Rehabilitation and Correction's network. Authorities say they were first tipped off to a possible problem in July, when their computer network support team got an alert that a computer "exceeded a daily internet usage threshold." When they checked the login being used, they discovered an employee's credentials were being used on days he wasn't scheduled to work. That's when they tracked down where the connection was coming from and alerted Marion Correctional Institution of a possible problem. Investigators say there was lax supervision at the prison, which gave inmates the ability to build computers from parts, get them through security checks, and hide them in the ceiling. The inmates were also able to run cabling, connecting the computers to the prison's network. Furthermore, "investigators found an inmate used the computers to steal the identity of another inmate, and then submit credit card applications, and commit tax fraud," reports WRGB. "They also found inmates used the computers to create security clearance passes that gave them access to restricted areas."
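The two signals that exposed the scheme -- a daily usage threshold being exceeded, and credentials active on days the employee wasn't scheduled -- amount to a simple anomaly check. Here is a minimal, purely illustrative sketch (field names, data shapes, and thresholds are assumptions, not the department's actual monitoring system):

```python
def flag_suspicious_logins(sessions, schedule, daily_limit_mb):
    """Flag sessions where credentials were used on an unscheduled day,
    or where daily traffic exceeded the usage threshold.

    sessions: list of dicts with "user", "date", and "mb_used" keys.
    schedule: dict mapping each user to the set of dates they work.
    """
    alerts = []
    for s in sessions:
        if s["date"] not in schedule.get(s["user"], set()):
            alerts.append((s["user"], s["date"], "unscheduled-day login"))
        if s["mb_used"] > daily_limit_mb:
            alerts.append((s["user"], s["date"], "usage threshold exceeded"))
    return alerts
```

Either rule alone produces noise (a legitimate shift swap, one heavy download day); it was the combination of both firing on the same credentials that pointed investigators at the hidden machines.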
