Comment Re:Political Absurdism (Score 1) 69

QoS and traffic management can help you cope with a bottleneck and still get some important traffic through. But it can only work by choosing which traffic to drop. Who should decide what traffic is most important? What if your customers start using a new high bandwidth service? Why do you get to decide that this traffic is unimportant and should be dropped?
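For a sense of what that looks like in practice, here is roughly how you'd prioritise one class of traffic with the Linux traffic-control tools (the interface name and port number are just placeholders):

    # Three-band priority queue on the uplink ("eth0" is an assumption)
    tc qdisc add dev eth0 root handle 1: prio
    # Put VoIP signalling (SIP on port 5060, for example) in the top band
    tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 5060 0xffff flowid 1:1

Everything that doesn't match a filter lands in a lower band and is the first to be dropped under load. Which is exactly the point: whoever writes those rules is deciding whose traffic loses.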

There is only one equitable solution to this problem. Upgrade the network so that the choke point no longer exists and no traffic needs to be dropped at all. This may mean laying more fiber, upgrading routers, or striking a deal with the biggest producers of data so they can completely bypass the choke point.

But there are often at least two companies involved in the negotiations (Netflix, Comcast, Cogent, Verizon, ...), and none of them wants to pay for the upgrades. So what do you do then? Should we force ISPs to upgrade network links at their own expense, passing the costs on to their customers? (IMHO, yes.) Or should ISPs be able to strong-arm everyone else into paying for the upgrades?

That is the fundamental argument.

Comment Re:Boards or ROM's (Score 2) 133

A colleague of mine is designing a C65 in an FPGA; it currently runs at 28.9x the speed of a C64, but with lots of features still unimplemented. Even designing the hardware at that level, it will be difficult to be completely bug-compatible, particularly since he's driving 1920x1200 video over HDMI.

Comment Re:Why didn't I hear about this before? (Score 1) 143

Lots of Phoronix blog posts have made it to the front page of Slashdot and discussed this exact issue. It's only recently that the open source drivers have been gaining momentum and becoming usable for gaming on most GPUs. But they've been fine for desktop work for ages.

Comment Re:Find a mentor, and write automated tests (Score 1) 254

Oh, and point 1.5 should have been: source control!

When you get something working, commit it. It's like the scientific method: if you can't reproduce something, it doesn't exist. Source control gives you the confidence to experiment, knowing that you can easily undo everything without losing something that you know works.
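For example, the bare-minimum loop with git looks something like this (commands only; any tutorial will cover branches and remotes):

    git init                      # once, when the project starts
    git add -A
    git commit -m "First working version"
    # ...experiment freely...
    git diff                      # see exactly what you changed
    git checkout -- .             # experiment failed: throw it away, or
    git commit -am "Experiment worked, keep it"

Every commit is a known-good state you can always get back to.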

Comment Find a mentor, and write automated tests (Score 1) 254

Other people have suggested how to learn the basics of a language, so I'll ignore that problem.

Designing and writing good code is an art form. There are many anti-patterns that could doom your project, and a more experienced developer can help you avoid falling into them.

A couple of hours a week spent explaining your design before you start writing code, tracking down why your code doesn't work as expected, or reviewing the code you believe is finished will save you days of wasted effort.

Structure your code so that you can write automated tests to cover *everything*. It will seem like a pain to start with. But once your project picks up speed, it will be invaluable to ensure you never break something that you know already works. Tracking down bugs in old code is painful.

If you do this right, you will get into the habit of writing the tests first, or alongside the code you are writing. You will find that you rarely run the code as a user would, because that just wastes time. And when you do finally run the code as a user, it just works.
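A minimal sketch of the habit in C++ (the clamp function and its behaviour are invented purely for illustration):

    #include <cassert>

    // Function under test -- written to satisfy the asserts below.
    int clamp(int value, int lo, int hi) {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    int main() {
        assert(clamp(5, 0, 10) == 5);    // in range: unchanged
        assert(clamp(-3, 0, 10) == 0);   // below range: clamped up
        assert(clamp(42, 0, 10) == 10);  // above range: clamped down
        return 0;
    }

The asserts pin down the behaviour you want before (or while) you write the body; running the binary is the test run.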

Comment Re:Code (Score 1) 80

Whichever way you set it up, you're going to need OS support to reconfigure the FPGA. Perhaps a well-defined section in the ELF format, with some kind of locking semantics to prevent more than one process from using it. That would depend on how many FPGA resources / CPU cores you have....
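The locking half might look something like this sketch. flock() is real POSIX, but "/dev/fpga0" and the load step are invented stand-ins for whatever interface the OS would actually expose:

    #include <fcntl.h>      // open()
    #include <sys/file.h>   // flock()
    #include <unistd.h>     // close()

    // Hypothetical: "/dev/fpga0" is an invented device node.
    int claim_fpga(void) {
        int fd = open("/dev/fpga0", O_RDWR);
        if (fd < 0) return -1;
        // Advisory exclusive lock: only one process may hold the fabric.
        if (flock(fd, LOCK_EX | LOCK_NB) != 0) {
            close(fd);  // somebody else already owns the FPGA
            return -1;
        }
        // ...load the bitstream referenced by the ELF section here...
        return fd;      // hold the fd (and the lock) while using the fabric
    }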

Anyone who is going to spend the money to buy and use these CPUs will have to solve this problem. In the short term, this is likely to be used only in high-end clustered server farms, for workloads where you wouldn't want to swap jobs very often anyway.

Comment Re:what? (Score 1) 80

IMHO hardware design tools have had far less investment than compiler tools, and we're overdue to invest more effort in improving them.

Since the FPGA is in the CPU, I assume there are either CPU instructions to pipe data in and out of the FPGA, or the FPGA has direct access to the memory controller / cache. Either way, you need a good way to synchronise between them.
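In code, the instruction-based variant might look something like this (every name below is invented; no such intrinsics exist today):

    #include <atomic>
    #include <cstdint>

    // Invented stand-ins for hypothetical "push/pop to the FPGA" instructions.
    extern "C" void __fpga_push(uint64_t word);
    extern "C" uint64_t __fpga_pop(void);

    uint64_t run_on_fabric(uint64_t input) {
        // Publish any shared-memory arguments before the hand-off.
        std::atomic_thread_fence(std::memory_order_release);
        __fpga_push(input);       // kick the FPGA
        return __fpga_pop();      // blocks until the result is ready
    }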

So consider a solution that takes LLVM bitcode and runtime profiling data. Pick out some number of hot code blocks in an optimisation pass, translate their data flow into VHDL (or write something better....), build that, calculate the final circuit timing, replace each code block with new LLVM intrinsics to hand over control, and then in the backend emit the new CPU instructions.
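In skeleton form, the pass might look like this (every type and helper here is an invented stand-in, not a real LLVM API):

    #include <string>
    #include <vector>

    struct Block {};        // a candidate hot code block
    struct Bitstream {};    // synthesised FPGA configuration
    struct Module {};       // the program being optimised
    struct Profile {};      // runtime profiling counters

    std::vector<Block> selectHotBlocks(const Module&, const Profile&);
    std::string translateToVhdl(const Block&);   // data flow -> HDL
    Bitstream synthesise(const std::string&);    // place and route
    bool meetsTiming(const Bitstream&);
    void replaceWithFpgaIntrinsic(Block&, const Bitstream&);

    void offloadHotBlocks(Module& m, const Profile& p) {
        for (Block& block : selectHotBlocks(m, p)) {
            Bitstream bits = synthesise(translateToVhdl(block));
            if (meetsTiming(bits))                     // timing closure?
                replaceWithFpgaIntrinsic(block, bits); // backend later emits
                                                       // the new instructions
        }
    }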

Obviously you'll also need to modify the operating system to manage the configuration of the FPGA, and ensure that you don't blow up the chip by running the wrong code at the wrong time.

But I think there is plenty of scope to implement something like this. There just hasn't been a need to build the tools before now.
