Have you considered starting a company around the OSS Project? It's typical for a project in your position to spawn a commercial support entity to satisfy support needs, the $$ for which is also used to develop/support the project.
Sadly, there just aren't enough places with lakes to store anything like the amount of power we'd need to store. You also have to deal with transmission loss between the solar site and the point of use. There was a proposal a while back to use massive, carved granite/stone blocks to store power, but it doesn't seem to have gotten much attention beyond its initial announcement.
Never mind highway 505...
Up until about the year 2000, I ran a small hardware shop for customers. Gradually, it became clear to me that the value of computers isn't in the hardware, it's in the software and data that they hold.
In response, I reinvented myself and co-founded a company that hosts data for (now) hundreds of clients and tens of thousands of users. Comparing the total hardware value of all our servers to our annual revenue puts hardware expenses (roughly) in petty-cash territory. Servers host a *lot* of data; it's the data, and the software used to manage the data, that's valuable.
I've had issues with my last several routers, so I recently bought the very first 100% OSS router. My thinking is that if it's open source, it's probably high-quality code, and it's more likely to get updated than proprietary firmware, where the vendor has a cash incentive to have you buy a new router rather than fix old bugs.
As far as hardware goes, it's mid-range router hardware: N300 Wifi with respectable antennas and a ho-hum 100 Mbit hardware switch. The UI was a little odd - more complex, with far more options than your typical Wifi router interface.
However, in the month or so that I've had it, it's been the least problematic Wifi I've had in a few years. I live in a densely populated area with quite a few other hotspots in sight, and I haven't noticed any issues where restarting the router made a difference.
I haven't had the chance yet to hack it, but even as just a router, this is a winner. Also, support products that are consumer friendly like this one. It's not even more expensive! (Currently just $52)
Fake Internet connectivity is when some WiFi access point hijacks all DNS requests to take you to some login web page or ad.
So my company presents at trade shows. Trade shows often have Internet service available at ridiculous prices, and frequently, performance is horrible. Often, rather than pay that ridiculous price, we have a laptop set up with the same configuration as our servers, and run with a recent backup copied onto the laptop. This lets us demonstrate our products with a "sandbox" - same as we use for development - without having to bother with the on site Internet.
Our mobile "server" is set up to wildcard DNS to a locally hosted copy of our website. Other vendors, of course, see our hot spot and figure they can use it to get Internet service on somebody else's dime. When they find that all they can get to is our website and product, it's typical for them to get upset - more than once we've been accused of hacking!
Now, we set up the hot spot with an SSID like "NoInternetHere" as a way of discouraging trouble.
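A minimal sketch of the wildcard-DNS trick, assuming dnsmasq on the demo laptop (the IP addresses are made up, and your actual setup may differ):

```
# /etc/dnsmasq.conf on the demo laptop (hypothetical addresses)

# Answer *every* DNS lookup with the laptop's own address, so any
# hostname a visitor types resolves to the local copy of the site:
address=/#/192.168.4.1

# Hand out DHCP leases on the hot spot's subnet:
dhcp-range=192.168.4.50,192.168.4.150,12h
```

With that in place, any device that joins the hot spot resolves every domain to the local web server - exactly the "fake Internet" behavior described above.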
Raise your hand if you honestly think that Mickey Mouse as a trademark will enter the public domain in 2023.
Networking is a skill I've spent a significant amount of time becoming adept at.
Nice to see *informed* input!
I would argue that the problem is the flat-rate pricing in $/kWh. A kWh produced at 1 AM has far less value than one produced at 7:00 PM. Why are we charging the same for both? Much of the issue you mention would largely vanish if electricity prices were negotiated more frequently, e.g. in hourly or 15-minute increments. If there really is a surplus of power between 10:00 and 2:00, as you state, then the price during that window would drop to match. This would create an incentive to feed in power when there's matching demand, and let the utility company profit off the difference.
Yes, it would be a significant cost to upgrade the power grid and contracts to work this way, but when has it been bad to connect buyers to sellers in a way that reflects the actual use of resources?
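To make the arithmetic concrete, here's a toy sketch (all prices are invented numbers, in cents/kWh): under a flat rate, a kWh at 1 AM and a kWh at 7 PM earn the producer exactly the same; priced hourly, the evening kWh is worth many times more.

```python
# Hypothetical hourly prices in cents/kWh: cheap midday surplus,
# expensive evening peak, modest baseline. All numbers are made up.
hourly_price = {h: 5 for h in range(24)}   # baseline off-peak
for h in range(10, 14):                    # 10:00-14:00 midday surplus
    hourly_price[h] = 2
for h in range(17, 21):                    # 17:00-21:00 evening peak
    hourly_price[h] = 30

FLAT_RATE = 10  # cents/kWh, identical at every hour

def revenue(kwh_by_hour, price_at):
    """Total cents paid for energy delivered at each hour of day."""
    return sum(kwh * price_at(h) for h, kwh in kwh_by_hour.items())

delivery = {1: 1, 19: 1}  # one kWh at 1 AM, one at 7 PM

flat = revenue(delivery, lambda h: FLAT_RATE)  # 20: both kWh valued equally
tou = revenue(delivery, hourly_price.get)      # 35: the 7 PM kWh dominates
print(flat, tou)
```

The spread between the two totals is the signal a flat rate erases: with time-of-use pricing, feeding in power at 7 PM is worth six times as much as at 1 AM.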
For example, I read a study a while back finding that pointing solar panels west of due south produced a much better match between solar production and electricity demand.
Note that the reverse trend is happening. Thanks to the very low cost of production and distribution, there are many, many, many alternate "shows" out there that you can watch.
Have you missed YouTube entirely? What rock have you been hiding under? Also, the place with the most interesting selection of documentaries and "non-primary" content is Netflix. There is a *ridiculous* number of YouTube channels with interesting content.
For example, as a violinist, I like Taylor Davis' work immensely - she mixes violin with themes from the movies and games I've loved....
Remember when MTV was as close as you could get to stuff like this?
I see the exact opposite trend. Netflix is growing like gangbusters, but is the epitome of having many shows that "you aren't paying for". It's not a la carte... at all! You pay a flat rate of $8/month and stream whatever you like.
Combine horrible customer service, high prices, and synchronized broadcasting, and you have unhappy customers switching to clearly better alternatives. "Paying for channels you don't use" is a symptom. The real problem is that they are horrible companies offering a previous-generation, substandard service at ridiculous prices that have risen much faster than inflation.
You don't know Microsoft very well, then. They've literally never done anything else!
1) They were late to the party with DOS. They ripped off QDOS and sold it to IBM. It was IBM who launched Microsoft; it was Microsoft's non-exclusive contract with IBM that allowed the IBM-compatible market to begin. That had never been done before, and it only happened because IBM didn't take the microcomputer seriously.
2) They were late to the party for the GUI. Windows was quickly thrown together after Microsoft tried to work with IBM, then decided to be dicks to IBM and took a lot of their design work.
3) Windows 95 was a rebrand of "Windows". So were Windows CE, ME, NT, XP, Vista, Mobile, and RT. In a sense, Windows 7 is the first "debranding" of Windows back to its marketing roots.
4) Microsoft goes through a major change in structure every 2-5 years. It's always made the tech rags, all the way back to the 1980s.
5) Their now-dominant Office was a rebundling of MS Word, Excel, and PowerPoint, which were previously sold separately.
6) Each of these Office products was a latecomer in its field, in part winning due to strange incompatibilities encountered by the "other guys". Remember the phrase "DOS isn't done until Lotus won't run"? Lotus 1-2-3 was the leading spreadsheet at the time.
and so on.... Just don't pretend that this BS is anything *new*. Market conditions were right, and MS had a combination of luck and determination to make the best of it. The market conditions have changed remarkably.
Scrubbing doesn't thrash your CPU as much as it thrashes I/O. Remember that both I/O and CPU are part of your "load average". This would be expected; it's reading every block on every device in your system.
You're right about the memory; I'd forgotten that detail since RAM is cheap. 1 GB per TB is the recommended amount, though in practice I've worked with far less in low/medium write-load environments.
There are so many pros for ZFS that I don't even. Until you try it, you won't "get it" - it's like trying to describe purple to a lifelong blind guy. But I'd adjust your list to at least include:
- Data integrity
- Effortless handling of failure scenarios (RAIDZ makes normal RAID look like a child's crayon drawing)
- Replication. Imagine being able to dd a drive partition without taking it offline, and with perfect data integrity.
- Clones. Imagine being able to remount an rsync backup from last Tuesday, and make changes to it, in seconds, without affecting your backup.
- Scrub. Do an fsck mid-day without affecting any end users. Not only "fix" errors, but actually guarantee the accuracy of the "fix" so that no data is lost or corrupted.
- Expandable. Add capacity at any time with no downtime. Replace every disk in your array with no downtime, and it can automatically use the extra space.
- Redundancy, even on a single device! Can't provide multiple disks, but want to defend against having a block failure corrupting your data?
- Flexible. Imagine having several partitions in your array and being able to resize them at any time. In seconds. Or don't bother to specify a size, and each partition uses whatever space it needs.
- Native compression. Double your disk space, while (sometimes) improving performance! We compressed our database backup filesystem: not only did we see some 70% reduction in disk space usage, we saw a net reduction in system load, as I/O overhead was significantly reduced.
- Sharp cost savings. ZFS obviates the need for exotic RAID hardware to do all the above. It brings back the "Inexpensive" in RAID. (Remember: "Redundant Array of Inexpensive Disks"?)
- CPU and RAM overhead comparable to Software RAID 5.
- Requires you to be competent and know how it operates, particularly when adding capacity to an existing pool.
- ECC RAM strongly recommended if using scrub.
- Strongly recommended for data partitions, YMMV for native O/S partitions.
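Most of the bullets above map onto one-liners. A sketch, not a recipe: these need a system with ZFS installed, and every pool, dataset, and device name here is made up.

```
zpool create tank raidz /dev/sda /dev/sdb /dev/sdc   # RAIDZ across three disks
zfs create tank/backups
zfs set compression=lz4 tank/backups                 # native compression
zfs snapshot tank/backups@tuesday                    # point-in-time snapshot
zfs clone tank/backups@tuesday tank/scratch          # writable clone, in seconds
zpool scrub tank                                     # verify every block, online
zfs set copies=2 tank/important                      # redundancy on one device
zfs set quota=50G tank/projects                      # a resizable "partition"
```

The quota can be changed (or removed) at any time, the clone is independent of the backup it came from, and the scrub runs while the pool stays mounted and in use.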
FYI: The larger Geo Metro 1.3-liter engine produced 70 HP. Cars in the 3,000 lb range fall into the "mid-sized sedan" class, which typically has 150-225 horsepower.
Yes, it was under powered, but it was not a "Geo Metro".