Hi. I'm Adrian. I work on wireless. It won't do what you say it will do. Don't cheap out on infrastructure.
Go get Windows 3.1 and Works. Stick it in a vmware VM. Cry at how fast the VM is.
Works fine for me on chips supported by DRI. The DRI2 support is being nailed down now, and once that's in it'll work fine on the same bleeding-edge Intel hardware Linux does.
I'm the wifi guy. The WiFi is now up to date on Intel and Atheros 11n. I'd like some help with Broadcom. I'll do the Intel and Atheros 11ac stuff early next year.
I'm currently evaluating power management. FreeBSD and xorg on my Ivy Bridge Lenovo X230 draw 9 W when idle. We're OK at using the deep sleep states per core and package, but there's room for improvement.
I'm making the turbo boost stuff work out of the box. Powerd is
I'm using an X230 in VESA mode, but it works fine if you use the new DRI and xorg code. I do day-to-day hacking on the Lenovo T400, mostly due to the CardBus slot I still use.
The only thing missing is ExpressCard hotplug.
So... it's not perfect. 10.0 will not be great on laptops. I expect 10.1, with updated DRI2 and xorg along with Intel WiFi fixes and my power management work, to be great.
Dude. OC-48s are small. ISPs connect in multiples of 10GE these days.
... it's called fuzzing.
You spend a bit more time writing some randomisation into your clients so they go off and do completely ridiculous stuff. Stuff you can't comprehend. That's why it's random (i.e., fuzzing).
again, this isn't new.
And yes, if you write your client simulation object(s) in something not stupid, you can scale it up to 100,000 active user simulation instances on a single server. Computers are fast.
Why do people keep saying that over and over again?
It's easy. You write a test suite that pretends to be a real user. You script it so there are some actions that aren't just "do A, do B, do C." You make them make errors. You have them put in garbage details. You have them fill out the forms incorrectly or incompletely. You have them skip pages or press "back".
Then you add a "pretend I'm the internet!" layer in between that simulates latency, so you make sure that your servers can handle the number of concurrent requests going on. A lot of not-so-seasoned web developers still fall for the "it worked on the LAN to 100,000 users, why not on the internet?" latency fallacy. Increased latency (due to RTT, packet drops, TCP retransmits, etc) leads to having more and more sessions going concurrently. That ties up resources at the server end.
Then you add a "pretend shit breaks!" layer. I.e., the user's internet connection breaks. They forget and come back after a while, and hit the restart page. The connection dies halfway through the transaction.
Then, once you've written that, you create 5 million instances of that. 100,000 per box sounds about right.
This isn't 1995. Computers are really god damned fast.
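The whole recipe above can be sketched in a few lines. Everything here (the action names, the in-process fake server) is my own illustration, not any particular tool; the point is that one event loop handles thousands of randomised clients with injected latency and mid-transaction failures without breaking a sweat:

```python
import asyncio
import random

ACTIONS = ["fill_form", "garbage_input", "skip_page", "press_back", "abort"]

async def fake_server(action: str) -> str:
    # Stand-in for a real HTTP endpoint: just validates the input.
    await asyncio.sleep(0)  # yield to the loop, as real I/O would
    return "400" if action == "garbage_input" else "200"

async def simulated_user(user_id: int, stats: dict) -> None:
    """One fuzzing client: random actions, random latency, random failures."""
    for _ in range(random.randint(1, 5)):
        action = random.choice(ACTIONS)
        # "Pretend I'm the internet!": random RTT keeps sessions open longer.
        await asyncio.sleep(random.uniform(0.0, 0.01))
        if action == "abort":
            # "Pretend shit breaks!": connection dies mid-transaction.
            stats["aborted"] += 1
            return
        status = await fake_server(action)
        stats[status] = stats.get(status, 0) + 1

async def main(n_users: int) -> dict:
    stats = {"aborted": 0}
    await asyncio.gather(*(simulated_user(i, stats) for i in range(n_users)))
    return stats

if __name__ == "__main__":
    print(asyncio.run(main(10_000)))
```

Ten thousand concurrent simulated users on a laptop; the real version would swap `fake_server` for actual HTTP requests and fan the script out across boxes.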
... replying from my real account.
Yeah, FreeBSD? dev boards? Any interest?
11n will work on Atheros hardware when they do one of the following:
* update pfSense to work against FreeBSD-10;
* start releasing snapshots of pfSense that work against FreeBSD-HEAD;
* backport the net80211, driver and userland tools from -HEAD to -8 (which I've done a few times, I've just not committed it to FreeBSD.)
11ac is a different story. I'm going to let the Linux side shake out before I start work on the FreeBSD 802.11ac support.
(FreeBSD wireless maintainer.)
Course they can. Dye the rice.
I bet he'd not hate it!
That's not likely to work. iOS and Android are too entrenched. No OEM is going to willingly walk into a new, untested OS.
Right. Then you grow your business and you need more people there... oh wait. Not everyone wants to live in Santa Cruz. So you have to move.
For good reason. The 17 is a total nightmare during summer and during peak hours. Having to drive to Santa Cruz each day for work would suck. But moving there would isolate you from the rest of the bay area. That may be what you want but it's a high price to pay for a job.
Why? Seriously? Because you want your print layout to be tightly controlled?
It's totally practical to do large-scale document editing without WYSIWYG. Know why? Because we all did it before Word 6.0 became a de facto standard. We would concentrate on document content first, then design a layout, then flow the content into that layout. Yes, like the HTML/CSS split.
These days people do poster layouts in _excel_.
Gah, sometimes I wish my beard were longer.
The IBM PC and PC/XT have an 8088 (or, in clones, an 8086) with a 20-bit address bus. That's a megabyte of address space, no matter what.
It doesn't matter where you put the BIOS - beginning or end. It's still a megabyte.
The BIOS is up at the end there because the 8088's reset jump vector is at the end of the address space, not the beginning (unlike the Z80, etc.). So you need to have ROM at that memory range for the CPU to start executing.
The 8086/8088 software interrupt vectors are at the beginning of your address space. So, there needs to be RAM there. The interrupt handler, NMI handler and all the software vectors can't be in ROM - well, they can be, but then they'd have to jump to RAM at some point to do anything flexible.
* need RAM at the bottom for the interrupt vector table and such (256 vectors × 4 bytes = 1 KB, 0x00000 -> 0x00400)
* need ROM at the end for the reset/power-on vectors
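The two constraints above fall straight out of the 8086's real-mode address math (segment shifted left 4 bits, plus offset, truncated to 20 bits). A quick sketch to check the arithmetic:

```python
def phys(segment: int, offset: int) -> int:
    """8086 real-mode address: (segment << 4) + offset, truncated to 20 bits."""
    return ((segment << 4) + offset) & 0xFFFFF  # 20-bit bus wraps at 1 MB

# The CPU starts executing at FFFF:0000 -- 16 bytes below the top of the
# 1 MB space -- which is why the BIOS ROM has to live up there.
assert phys(0xFFFF, 0x0000) == 0xFFFF0

# The interrupt vector table sits at the bottom: 256 vectors x 4 bytes.
assert phys(0x0000, 256 * 4) == 0x400

# 20 address lines -> exactly one megabyte, no matter what.
assert 2 ** 20 == 1024 * 1024

# Anything past the top just wraps back around to low memory.
assert phys(0xFFFF, 0x0010) == 0x00000
```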
The IBM PC architecture also assumed people would build ROM add-on applications, like BASIC (which they did), but also word processors, spell checkers, etc. That's why there are 8 ROM slots on the PC and PC/XT. But people soon adopted disk applications rather than ROM applications.
So, I don't buy that "it's Gates' fault." The only things I can see he could've done differently are:
* advocate a 68000 CPU - but then he'd have issues at 16MB - and Amiga/MacOS had exactly that
* add more RAM and less peripheral address space - but you're still capped at 1MB
* advocate for an EMS (page-flipping) architecture early on, and encourage people to make use of it.
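EMS-style page flipping is easy to model: the CPU only ever sees a fixed 64 KB page frame inside the 1 MB space, but each of its four 16 KB slots can be remapped onto any page of a much larger pool with one register write. A toy sketch (class and method names are my own illustration, not the actual EMS API):

```python
# Toy model of EMS-style bank switching: a 64 KB window, four 16 KB slots,
# each slot remappable onto any page of a much larger memory pool.
PAGE = 16 * 1024

class EmsBoard:
    def __init__(self, total_pages: int):
        self.pool = [bytearray(PAGE) for _ in range(total_pages)]
        self.mapping = [0, 1, 2, 3]  # which pool page backs each frame slot

    def map_page(self, slot: int, pool_page: int) -> None:
        self.mapping[slot] = pool_page  # one I/O port write on real hardware

    def read(self, frame_offset: int) -> int:
        slot, off = divmod(frame_offset, PAGE)
        return self.pool[self.mapping[slot]][off]

    def write(self, frame_offset: int, value: int) -> None:
        slot, off = divmod(frame_offset, PAGE)
        self.pool[self.mapping[slot]][off] = value

board = EmsBoard(total_pages=512)   # 512 * 16 KB = 8 MB behind a 64 KB window
board.write(0, 0xAA)                # lands in pool page 0
board.map_page(0, 100)              # flip slot 0 to pool page 100
board.write(0, 0xBB)                # same address, different backing page
board.map_page(0, 0)                # flip back: the old byte is still there
assert board.read(0) == 0xAA
```

That's the whole trick: the address space never grows, the backing store does.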