
Comment: Re:IOMMU is still missing. (Score 1) 49

by LoRdTAW (#49681681) Attached to: GPU Malware Can Also Affect Windows PCs, Possibly Macs

...except that their drivers don't use it. Yes, there's an IOMMU in modern CPUs. No, current GPU drivers don't use it fully. (According to several sources about this proof-of-concept, neither Nvidia's nor AMD's drivers properly use the IOMMU to isolate the GPU. They basically just grant the device wholesale access to memory.)

I misunderstood you due to the vague wording: "No, current GPU drivers don't use it fully." The driver has nothing to do with enabling the IOMMU.
The IOMMU automatically maps a device into its own virtual address space. This prevents a rogue device from reading arbitrary memory outside of its virtual space. The kernel then uses the IOMMU's translation tables to figure out where things actually are in physical address space. BUT if the driver for that device allows it to read arbitrary memory locations, then there is a problem. I assume this stems from the newer GPGPU and HSA functionality, which aims to reduce overhead by letting the video card read certain memory locations directly instead of copying.
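
A toy sketch of that mapping (Python, purely illustrative: a real IOMMU does page-table translation in hardware, and the device names and page numbers here are made up):

```python
# Toy model of IOMMU-style address translation. Purely illustrative:
# a real IOMMU translates per-page in hardware, not per-address in software.

PAGE_SIZE = 4096

class ToyIOMMU:
    def __init__(self):
        # Per-device page tables: device -> {device_page: physical_page}
        self.tables = {}

    def map_page(self, device, dev_page, phys_page):
        """Kernel grants `device` access to one physical page."""
        self.tables.setdefault(device, {})[dev_page] = phys_page

    def translate(self, device, dev_addr):
        """Translate a device-visible address; fault if unmapped."""
        page, offset = divmod(dev_addr, PAGE_SIZE)
        table = self.tables.get(device, {})
        if page not in table:
            raise PermissionError(f"{device}: IOMMU fault at {hex(dev_addr)}")
        return table[page] * PAGE_SIZE + offset

iommu = ToyIOMMU()
iommu.map_page("gpu", dev_page=0, phys_page=42)  # kernel maps one page for the GPU

print(hex(iommu.translate("gpu", 0x10)))   # inside its mapping: translates fine
try:
    iommu.translate("gpu", 0x2000)         # outside its mapping: faults
except PermissionError as e:
    print(e)
```

The catch is exactly the one described above: if the kernel-side driver maps in pages it shouldn't, the translation happily succeeds. The IOMMU gets bypassed, not broken.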

The driver does not have to enable the IOMMU; that is automatic. The driver lives in kernel space and from there can do what it damn well pleases in terms of reading/writing memory, if the developer inserts such functionality. The driver isn't disabling the IOMMU or failing to enable it; it is allowing the malicious code to read arbitrary memory through vulnerabilities at the kernel level. This bypasses the IOMMU rather than disabling it. The only protection would be to better enforce memory access privileges in the kernel and/or remove the arbitrary memory access in the first place.

A good analogy would be a quarantine facility with individual outer doors for each room, each of which is occupied by a single patient (a device). Patients can come and go as they please using their own doors. But inside the facility there is a hall which connects all of the rooms via a locked door for each room (the IOMMU). Patients can't open that door, but someone with a key can (the kernel). From the hall, a nurse can visit any patient (the driver). But a patient cannot leave the room through the locked door without that nurse. This isolates the patients from each other. *BUT* if a patient fools the nurse into letting them into the hall, or the nurse lets them wander out the door, then all bets are off. That is what is happening here: the malicious code running on the GPU is fooling the nurse (the driver) into leaving the room and wandering into another.

The kernel is the weak spot of any OS. It marshals userspace code, prevents it from reading arbitrary memory, and segments users from each other. But once inside the kernel, code can do whatever the kernel allows, which is pretty much anything. I could write a module that allows arbitrary memory access from userspace if I wanted to.

Comment: Re:Sooo.... (Score 2, Insightful) 95

by LoRdTAW (#49681133) Attached to: 'Venom' Security Vulnerability Threatens Most Datacenters

Not sure where you are getting this floppy business from. VirtualBox Guest Additions are loaded from a single CD image; all the driver packages are on that image. Hyper-V also uses a CD image. I have also used VMware in the past, and they too used CD images.

Perhaps you are confusing that with the provided floppy controller emulation.

Comment: Re:IOMMU (Score 1) 67

by LoRdTAW (#49673137) Attached to: Proof-of-Concept Linux Rootkit Leverages GPUs For Stealth

The IOMMU does take care of the DMA problem, but I am betting this has something to do with how the GPU kernel talks to the OS kernel. The OS controls memory, so perhaps some driver exploit fools the kernel into reading the wrong memory. The GPU says, "hey, I need this memory," and since the GPU driver lives in kernel space, it's possible it could read protected memory.

Comment: Re:...with no memory protection. (Score 1) 49

by LoRdTAW (#49673079) Attached to: GPU Malware Can Also Affect Windows PCs, Possibly Macs

Nvidia and AMD need to properly implement support for IOMMU & the MMU inside the GPU itself.

The IOMMU solved this by giving each I/O device its own virtual I/O memory space. So no, the GPU can't randomly read protected memory. Nvidia and AMD don't have to implement anything, as this is the job of the IOMMU, not the endpoint device itself.

This was the same problem as the FireWire DMA exploit. Essentially, before the IOMMU, it was possible for any PCI card to read any memory it wanted. FireWire, being a memory-mapped serial bus, easily allowed DMA access to a running or crashed machine. It was used for Linux kernel debugging because even if the kernel crashed, taking out the machine, the other I/O subsystems were still functional. You could still dump the contents of RAM to another PC to analyze what went wrong.

Comment: Re:Only 30 Grand? (Score 1) 426

by LoRdTAW (#48792959) Attached to: Chevrolet Unveils 200-Mile Bolt EV At Detroit Auto Show

Nah, it's just ignorance. The average person doesn't understand how diesel differs from gasoline. All they know is it goes into a vehicle and makes it run, just like gasoline. So I assume they just wind up calling any fluid that goes into a fuel tank "gas."

Hell, I bet if you asked the average person what diesel is, a likely answer would be "gas for trucks."

Comment: DIY or kit? (Score 1) 68

by LoRdTAW (#48692989) Attached to: Ask Slashdot: Best Wireless LED Light Setup for 2015?

Are you looking for a DIY or a more off-the-shelf setup?

The most straightforward off-the-shelf route might be a DMX (not the rapper) controlled lighting system. DMX is a very common and well-documented protocol used to control lighting in commercial and entertainment setups. There are also a few other protocols. Since it is used extensively in the entertainment business, there may well be off-the-shelf software that will sequence the lights and music. You can easily find USB-DMX controllers for well under a hundred bucks. Another bonus is that there are also outdoor-rated DMX lights and related components, making it safe and easy for you.
Start here then read my rantings below: <url:>
Why are you asking about wireless? What exactly do you want to be wireless? Each bulb or strand? Or, do you want the connection between your lights and controller to be wireless?
Either way you need to turn the lights on and off. For that you simply use relays. There are tons of Arduino shields out there that feature a few 5 or 10 amp relays, which is enough to drive a few strands of incandescent lights each. A ZigBee shield can be used to make them wireless.
There are also plenty of simple USB-controlled relay boards out there. I would go that route and possibly try to find one that is already in an enclosure with sockets and overcurrent protection via a fuse or circuit breaker. There might be relay boards that are ZigBee or WiFi enabled, but if you go wireless, first make sure you are not putting a box of relays outside, exposed to the elements, unless the enclosure carries an outdoor NEMA or IEC rating. The NEMA rating should be 4X and the IEC IP rating should be 65 or better. My advice? Use extension cords and locate the relay box indoors.
The USB relay box should come with libraries that you can use to write your own software to control it. Many are just USB-serial devices that use a simple ASCII protocol: usually something like a command character, followed by an address number for the relay, terminated with a carriage return. If you use an Arduino there are probably simple I/O libraries for ZigBee, but you still might have to roll your own code for the Arduino to glue everything together.
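
As a sketch of how little glue that takes, here is a hypothetical example in Python. The "N3\r" (relay 3 on) / "F3\r" (relay 3 off) command format is an assumption for illustration; every vendor's protocol differs, so check your board's manual:

```python
# Sketch for a hypothetical USB-serial relay board. The "N<relay>\r" /
# "F<relay>\r" command format is an assumption -- check your board's manual.

def relay_command(relay: int, on: bool) -> bytes:
    """Build the ASCII command to switch one relay on or off."""
    if not 1 <= relay <= 8:
        raise ValueError("relay number out of range")
    return (("N" if on else "F") + str(relay) + "\r").encode("ascii")

# With real hardware you would open the port with pyserial and write:
#   import serial
#   port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)
#   port.write(relay_command(3, True))   # strand on relay 3 lights up

print(relay_command(3, True))   # b'N3\r'
print(relay_command(3, False))  # b'F3\r'
```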
Remember, for resistive loads like incandescent or basic LED lights, watts = volts * amps. So if the relay is rated at 5 amps, the maximum wattage is 120 volts times 5 amps, or 600 watts. If you live in a 230 V country then you can roughly double that, provided the relay is rated for 230-240 V (they usually are, but always check). Also be sure the total load can be handled by the outlet you are plugging into. Back in the day, when I used to get fancy, my draw was around 16 A. I used a heavy-duty 12 AWG cord that ran into my basement to a dedicated 20 A circuit straight out of the panel box.
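
That arithmetic is worth doing before you plug anything in. A quick sanity check in Python (the strand wattages are made-up example numbers) keeps you from cooking a relay:

```python
def max_watts(volts: float, relay_amps: float) -> float:
    """Maximum resistive load a relay contact can switch: W = V * A."""
    return volts * relay_amps

def load_ok(strand_watts, volts=120.0, relay_amps=5.0) -> bool:
    """True if the combined strands stay under the relay's rating."""
    return sum(strand_watts) <= max_watts(volts, relay_amps)

print(max_watts(120, 5))          # 600 watts on a 5 A relay at 120 V
print(load_ok([150, 150, 200]))   # 500 W total: fine
print(load_ok([300, 300, 100]))   # 700 W total: too much for one relay
```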
If you are looking to control the brightness, then it gets tricky. LED Christmas lights are normally wired as a series string with basic current limiting using a resistor; that is why they have a slight flicker. Those can be dimmed using a few tricks. AC is not easily varied without elaborate inverters under digital control, but if the load is mainly resistive, such as incandescent or cheap LEDs, then you can use a MOSFET inside a bridge rectifier to PWM the AC. It is easier and better than the old phase-fired thyristor method of power control. The way it works is you wire a MOSFET across the + and - DC "output" of the bridge, and the AC terminals of the bridge are put in series with the load. This way you can PWM the AC waveform using a single MOSFET. I have not used it outside of a simulator and have seen it used only once, to control a Tesla coil. But if the LEDs are current controlled then it won't work, as the current controller regulates the brightness of the LEDs by watching the current and switching the power on and off very rapidly to maintain a constant current. LED Christmas lights that don't flicker probably use this method. And in retrospect, LEDs might flicker too much to be of any use. In my opinion, don't bother with dimming unless you plan to get really involved with electronics and building your own circuits at mains voltage. And of course, you should already be experienced with this type of mains-voltage electronics work.
If you want to control individual bulbs, then forget it. It is not impossible, but it is very labour-intensive, complex, and time consuming.
And lastly, for the actual controller, you can use a Raspberry Pi, BeagleBone Black, or even the Hardkernel ODROID-C1. Write your code to play a music file in sync with the lighting routine; there may even be libraries or existing software out there for this.
If you roll your own music player and light sequencer it gets a bit complex. Something like an audio library that can easily load a file and play it, while another thread watches the progress and fires off commands in sync to your relay boards. The commands can be stored in a text file, each line containing a time along with the lights to be switched on and off. Make a routine to load the file and parse each line into an array. Then, as you watch your music player's progress, fire off the commands at the current array index when its time is reached, then advance the index and parse the next set of commands. Only go this route if you are looking to roll your own code. Otherwise, I am sure someone has already written an open-source music/light controller.
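
A sketch of the cue-file idea in Python, assuming a made-up "seconds:relay,relay,..." line format (your own sequencer would define its own): parse everything up front, then have the watcher thread compare elapsed playback time against the cue list.

```python
# Sketch of the cue-file idea. The "seconds:relay,relay,..." line format
# is made up for illustration; use whatever format your sequencer writes.

def parse_cues(text: str):
    """Parse cue lines like "2.5:1,3" into sorted (time, [relays]) tuples."""
    cues = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        when, _, relays = line.partition(":")
        cues.append((float(when), [int(r) for r in relays.split(",") if r]))
    return sorted(cues)                   # keep cues in time order

def due_cues(cues, elapsed):
    """Return the cues whose time has been reached."""
    return [c for c in cues if c[0] <= elapsed]

cue_file = """
# time : relays to switch
0.0:1
2.5:1,3
5.0:2
"""
cues = parse_cues(cue_file)
print(cues)                 # [(0.0, [1]), (2.5, [1, 3]), (5.0, [2])]
print(due_cues(cues, 3.0))  # the first two cues are due at t=3.0
```

In real use, the player thread would update `elapsed` from the audio position and the watcher would fire each due cue once, sending the relay commands.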

(Rant on)
Btw, slashdot. Fuck you and your lameness filter. My post was blocked for no reason. Simply throwing a lameness filter error with no reason as to what is triggering it is mind boggling and rage inducing. I first tried to paste my post into openoffice, change all the caps to lowercase and then just capitalize sentences to keep it sane. Your shit filter still told me to go fuck myself for no reason given. I simply wrapped code tags around it and bypassed the lame lameness filter. I have been on this site for over 10 years, since 99/2000 and I still have to be treated like a child. Fuck you. (Rant off)

Comment: Re: Political Correctness on massive steroids (Score 1) 556

by LoRdTAW (#48639545) Attached to: FBI Confirms Open Investigation Into Gamergate

I like how he name-drops Anal Cunt as if the industry is full of AxCx-like bands. The first AxCx show I saw had 9 people in attendance. The second was a bit over 20, maybe 30. There are and were plenty of fucked up bands; one of the most offensively hilarious was The Mentors (the old stuff with El Duce). They coined the genre term "rape rock." Then you had Anal Blast, who are no more after Don Decker dropped dead from alcohol abuse. He would have strippers or prostitutes come on stage and he would demean them. Then he would shit all over the place. That guy had some serious issues, if you look up interviews after he died. There are a ton of others out there. I'm sure good ol' GG would get some bleeding hearts foaming at the mouth.

Search for porn/gore grind on YouTube and have fun. Some are so bad it's hilarious. And that is the point. Be as bad, offensive or as fucked up as possible.

And the black dude in the Burzum shirt has to be a staged gag or shopped. That or he is going for the biggest ironic hipster award, trying to best the Jewish, Hitler art admirer.

Comment: Subject need not apply or exist... (Score 3, Interesting) 181

by LoRdTAW (#48362407) Attached to: Multi-Process Comes To Firefox Nightly, 64-bit Firefox For Windows 'Soon'

I switched to Chrome a while back when it came out. It supported most of the then-new HTML5 features, most importantly playing YouTube videos without Flash. At first I used Chrome sparingly; it took a bit to get used to. Then, after a few vulnerabilities were found in FF which could allow attackers to read the memory of other tabs, I switched for good. The internet is a dangerous place, and multiprocess sandboxing of tabs made perfect sense. I also really liked its UI, which was much simpler: tabs, URL bar, and a few controls like forward, back, and reload along with a settings button.

But it came with a cost. I connected it to my google account and it also integrated with my phone and tablet bringing my bookmarks, passwords and other credentials across all of my devices. So I am hooked on the convenience of Google integration, for better or worse. Worse most likely. Plus logging into sites that use Google is very convenient. I'm addicted.

So going back to FF for me will be difficult.

My only concern with multi process is memory footprint. FF is great for low memory systems like virtual machines and older systems. Chrome is a memory hog and easily uses a gigabyte or more. Right now with 8 tabs open I have 12 chrome processes, two are close to consuming nearly 300 megs each, one nearly 200 and the remaining are anywhere from 12-87 megs. I assume the three large processes are the ones running the show (windows, IPC, etc). The largest being the parent process that spawns the others. The smaller 8 processes are the actual tabs. That is pretty much 1 gig of RAM for 8 tabs. I have computers and VM's with less running various test systems. FF on those machines clocks in at 250-300 megs under heavy use.

Comment: Re:Watch out Pi (Score 2) 107

by LoRdTAW (#48359829) Attached to: Eben Upton Explains the Raspberry Pi Model A+'s Redesign

Okay, I see your point. Your original statement confused me, as you stated "Anyone who really complains about the price of the RPi is expecting it to be something it's not." That made it sound as if the price was the problem, not the performance. Yeah, performance-wise it does suck, but for most basic maker projects it is plenty. Most of those projects turn lights on and off or move an RC servo: trivial stuff that an old 8051 could handle.

The only device in the price range is the BBB. But it suffers from poor community documentation; there is no decent wiki other than the official one. If you ask me, the biggest drawback of most of these boards is the laissez-faire attitude of the developers with respect to documentation, library support, and finding basic information in general. Just try to find a decent example of programming the BBB in C. There are a few, but they only came into being recently and NONE of them are officially from the BBB team. When the BBB was released, you had to post to the mailing list to get help for any language other than JS. I'm sorry, but that is some real lazy bullshit right there. They made a decent board and ignored the entire documentation part.

If I were developing a board I would ensure:
A tutorial for programming the board in several of the most popular languages: C, C++, Python, and Java.
Example code for accessing each of the I/O features: digital, analog, PWM, I2C, SPI, etc.
Thorough documentation on I/O access for writing libraries in other languages, e.g. Rust, Haskell, Ada, D, Go, etc.
An Arduino C library for ease of application development and porting.
A wiki tying all of this together so a user from novice to embedded superstar can waltz in and start writing code after a few minutes of browsing.
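
For the digital I/O item, even a sysfs-level sketch would go a long way. Something like this in Python (pin 60 and the paths are assumptions for illustration, and sysfs GPIO is the legacy Linux interface; newer kernels prefer the GPIO character device):

```python
import os

# Sketch of the kind of GPIO example that should ship with a board.
# Uses the legacy Linux sysfs GPIO interface; pin 60 and the base
# path are assumptions for illustration.

GPIO_BASE = "/sys/class/gpio"

def gpio_write(pin: int, value: int, base: str = GPIO_BASE) -> None:
    """Export a pin if needed, set it as an output, and drive it high or low."""
    pin_dir = os.path.join(base, f"gpio{pin}")
    if not os.path.isdir(pin_dir):
        with open(os.path.join(base, "export"), "w") as f:
            f.write(str(pin))            # ask the kernel to expose the pin
    with open(os.path.join(pin_dir, "direction"), "w") as f:
        f.write("out")
    with open(os.path.join(pin_dir, "value"), "w") as f:
        f.write(str(value))

# On a real board: gpio_write(60, 1) drives the pin high.
```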

The BBB was a great board in many aspects but used JS and Node.js as its development platform of choice. Trying to write C code was a poorly documented black art. Dumb.

Comment: Re:Watch out Pi (Score 2) 107

by LoRdTAW (#48355337) Attached to: Eben Upton Explains the Raspberry Pi Model A+'s Redesign

The vast majority of the projects the RPi is being used for could be done by a microcontroller. But when you compare it against other devices used in the same applications, for the same cost as an Arduino you get 15x the speed, 100x the RAM, Ethernet, and an OS with a complete TCP/IP stack ready to go.

There certainly are a lot of overpowered Pi projects out there. The biggest benefit, though, is the full Debian Linux OS running on the board. You can easily create a really nice web-based interface and run it all from the board over WiFi or Ethernet, without cobbling together a bunch of Arduino shields and figuring out how to communicate with them over serial. An HMI, logic controller, and development environment wrapped up in one unit, so to speak. You also don't need a separate PC to develop on: just a keyboard, mouse, and monitor.

Anyone who really complains about the price of the RPi is expecting it to be something it's not. There are plenty of boards far more powerful than the RPi for under $100 and they don't sell anywhere near as well, don't have anywhere near the same number of projects being developed for them and don't have even a fraction of the community support.

No one complains about the Pi's price. In fact, it is its greatest selling point. I think my biggest complaint is that the Pi gets the most press and the others are drowned out by the sea of "Pi noise". There are a few other boards out there:
The BeagleBone Black, which is another 20 bucks and has a much more powerful CPU, hardware Ethernet, and better GPIO. Its layout kinda sucks, though: the single USB port is too close to the micro HDMI port, which means USB connectors physically interfere with it. And micro HDMI ports suck. Another problem is that it was just announced that TI might not want to continue supplying the SoC for the BBB, forcing the manufacturers to switch to a Broadcom SoC, so its future is unknown. Plus they insist on using Node.js as the primary engine for writing code. Dumb.
There is the UDOO. But it is pointless to cram both an Arduino Due (ARM based) and a quad-core i.MX6 onto the same board. It adds a needless layer of complexity, abstracting the I/O from the main CPU via a UART and a secondary CPU whilst forcing the burden of communication between the two onto the user. A stupid setup.

After those boards there really isn't any decent competition that brings anything new to the table. It's just another i.MX6 or OMAP board that doesn't offer anything that compelling. It runs Linux, big whoop. What about I/O? I need PWM, ADC, and GPIO, not a few GPIOs broken out to a header. Bunnie Huang attached an FPGA to an i.MX6 in his open-source laptop. That was a brilliant move. But at $500 for the board, I can buy an ITX board with a PCI slot and pop in a PCI FPGA card from Mesa Electronics for far less.

The Intel Galileo is another interesting board, as it added Arduino library and shield compatibility. So you have a board with Ethernet and USB that runs Linux and supports most of the Arduino libraries. Arduino users can port their code and take advantage of on-board Ethernet, far more memory, threading, and all the goodness that comes with a full-blown Linux PC. That was a pretty damn smart move. But it still lacks CPU power, has no display or GPU, and its I/O hangs off SPI hardware instead of GPIO registers on an internal bus. So for every step forward, we take two back.

The BeagleBoard-X15 brings a very powerful SoC to the table. My only concern is software support. If we can develop software for the PRU-ICSS as easily as for an Arduino, then we can really develop some serious applications. This would be a killer robot board. And the DSP should come with OpenCV support and easy-to-use libraries so we can write DSP code without needing to be a TI engineer or an experienced embedded developer. Abstract away the complexity with libraries and good documentation and you will cut through the Pi noise.

"The Street finds its own uses for technology." -- William Gibson