A practical limit for silicon-based CPUs. I've been told that the military uses a different semiconductor material to run CPUs at 100 GHz at much higher temperatures.
I'm not sure those are CPUs in the same sense. You can easily find simpler circuits that operate at such frequencies, e.g. microwave amplifiers, but a modern CPU involves much more than the raw switching speed of transistors. Keeping the core in sync with itself gets harder with a wavelength of 3 mm (that's for 100 GHz in vacuum; in a solid it would be even shorter).
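The 3 mm figure is just the free-space wavelength at 100 GHz, a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope: free-space wavelength of a 100 GHz clock signal.
c = 299_792_458            # speed of light in vacuum, m/s
f = 100e9                  # 100 GHz
wavelength_mm = c / f * 1000
print(f"{wavelength_mm:.2f} mm")   # about 3 mm
```

In a solid, signals propagate at a fraction of c, so the on-chip wavelength is shorter still, which is what makes clock distribution across a multi-millimeter die so painful at those frequencies.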
AMD best hope this CPU has some actual guts to it for performance / power efficiency.
Perhaps cores-schmores is one way to approach this? Lots of small cores with relatively slow clocks, as higher clocks tend to worsen power efficiency. I'm not discounting Intel's success with single-core performance per se, but I sometimes feel it's aimed at speeding up legacy applications, while those with modern OSes and code are happy with the cheaper multicore offerings from AMD.
Intel won't let AMD die.
Nope, but they can *puts on sunglasses* chip it away little by little.
You're talking about two very different things. The old Monero (pre-0.9) kept the entire blockchain in RAM and so required huge amounts of RAM to be installed in the machine. The current version uses LMDB which is a memory-mapped database. The mmap may use a huge chunk of *virtual address space* but it never uses more than the currently available amount of RAM.
Ah, I didn't realize that. I recently reinstalled Monero when I got a big-ass machine with tons of RAM, and I wasn't sure if its performance was due to the hardware or other improvements. (Previously, it was a pain to use on machines with a measly 8 GB memory.)
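The virtual-vs-resident distinction described above is easy to demonstrate. A minimal sketch in Python (assuming a 64-bit OS with lazy page allocation, such as Linux with default overcommit settings):

```python
import mmap

# Reserve 1 GiB of *virtual* address space with an anonymous mapping.
# Physical pages are committed only when a page is first written.
size = 1024**3
m = mmap.mmap(-1, size)
m[0] = 42        # commits one page (typically 4 KiB), not the whole GiB
assert m[0] == 42
m.close()
```

This is the same mechanism a memory-mapped database like LMDB relies on: the process's virtual size looks enormous, while resident memory tracks only the pages actually touched.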
Many of these features already exist in bitcoin. Bitcoin transactions are also based on a scripting language, so it is relatively easy to create new transaction types and features.
True. However, the distributed programming aspect is much more prominent in Ethereum, while the Bitcoin community is still largely focused on simple payments.
The VRAM issue was fixed. The latest Monero update represents over a year of work and thousands of commits. It is the shit!
I always build my daemon from the latest GitHub sources. Anyway, this isn't the only issue making it slow/heavy/conservative compared to something like Boolberry.
On the fringe you can add Monero.
Fringe? IMHO, Monero is the Microsoft of second generation cryptocurrencies -- it's the big, slow, conservative choice of Cryptonote coins. For a leaner and generally more interesting alternative, have a look at Boolberry, but keep Monero in mind for long-term investment. (At the moment, a Monero node is taking over 14 GB of virtual memory on my machine, Boolberry "only" 4.)
It looks like the OP is a newbie to cryptocoins, so let me elaborate a bit. Traditional 'altcoins' are based on the Bitcoin codebase, so for things like proper anonymity, look for independently developed codebases such as Cryptonote (whose implementations include Monero and Boolberry) and Ethereum.
For mining profitability, Boolberry and Ethereum on GPUs are doing nicely at the moment, Bitcoin and Monero not so much. Of course, this may change rapidly and you need to do your homework. Good old bitcointalk.org is still a useful hangout for learning about coins, though many notable coins have their own forums with more detail.
Bitcoin is still the gold standard in value of cryptocoins, technically viable and well accepted by merchants. Forget about mining it, but don't dismiss it otherwise. For example, the programming aspects of Ethereum were largely present in Bitcoin already, it's just that Ethereum takes these to the front stage and makes them easier to use.
In the physical world, there are plenty of things that involve frequencies in the analog sense, and there you find bandwidth in its original meaning. These include digital transmissions, once you consider their physical representation, so the concept matters to people who design "broadband" modems, for example. They also include completely analog systems such as human hearing. I understand that laypeople often take scientific terms and use them in some vague, narrow and "wrong" sense, but that's far from having the actual scientific language itself evolve.
If network speeds were crazy high, you could treat remote memory as dynamically attachable RAM for loads that suddenly require it (that would take roughly a 100 Gbps connection, FWIW).
Latency would still be an issue, so this wouldn't replace local RAM for all purposes, though it could be good enough for some cases. It's more like a disk than memory, and many people already use The Cloud(TM) this way, privacy and availability be damned.
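Rough numbers back both points up; a quick sanity check (the 8 GiB workload and the 1 ms round trip are illustrative assumptions):

```python
# Time to ship 8 GiB over a 100 Gbps link, ignoring protocol overhead.
gib = 8 * 1024**3                  # bytes
seconds = gib * 8 / 100e9
print(f"{seconds:.2f} s")          # well under a second -- fine for bulk loads

# But latency dominates small random accesses: local DRAM is on the
# order of 100 ns, while even a nearby-datacenter round trip is ~1 ms,
# roughly four orders of magnitude slower per access.
dram_ns, net_ns = 100, 1_000_000
print(net_ns // dram_ns)
```

Hence the "more like a disk than memory" characterization: bulk bandwidth can be respectable, per-access latency cannot.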
the Internet options are 1. Shit 2. Shittier.
Also known as Tier 2, from number two.
I technically know how to type on both German and US keyboards. In practice, I find German layouts to be incredibly tedious -- even when typing German.
I much prefer a US keyboard layout with a working "Compose" key. Typing accented characters is very straightforward and logical when composing the character from its underlying parts (e.g. Compose, ", a yields "ä"). Yes, it requires multiple keystrokes to type a single character, but I have gotten pretty fast at typing those.
Alternatively, some of my friends/relatives have switched to a US layout and refuse to enter native accented characters altogether. German officially sanctions the use of substitutes: "ä" becomes "ae", "Ö" becomes "Oe" and "ß" becomes "ss". Maybe the French should come up with a similar system.
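Those substitution rules are mechanical enough to sketch in a few lines; a hypothetical helper (the function name and the table, extended to the full umlaut set, are mine):

```python
# Officially sanctioned German ASCII substitutions, per the comment above.
SUBS = {"ä": "ae", "ö": "oe", "ü": "ue",
        "Ä": "Ae", "Ö": "Oe", "Ü": "Ue", "ß": "ss"}

def asciify_german(text: str) -> str:
    """Replace German umlauts and eszett with their ASCII substitutes."""
    return "".join(SUBS.get(ch, ch) for ch in text)

print(asciify_german("Größe"))   # Groesse
```

French has no comparable sanctioned scheme, which is presumably the point of the quip: "é" → "e" loses information in a way "ä" → "ae" does not.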
It's the same issue in Finland; coding on our native layout is excruciating. Fortunately, there are simple ways to change the layout on the fly for typing longer native texts, such as
setxkbmap "us,fi" -variant "altgr-intl," -option "grp:alt_shift_toggle"
The US intl variant is nice for having combos like AltGr+q for ä rather than separate accent/compose keys.
Were there fewer fools, knaves would starve. - Anonymous