No. If you aren't free to leave, you're under arrest.
You aren't subscribed to a meal; it's a one-shot deal, and the worst that can happen to them is that they refund your money and kick you out. Writing down the rules is generally unnecessary. I guarantee you that if you go in and start shoveling food into a bucket to take away, or try to fill a 50-gallon container with "unlimited refill drinks", you'll be stopped.
Since there are TRUE "unlimited data" plans, there's a different expectation when an ISP says "unlimited data" or "no data caps". Claiming that you get unlimited data, but they'll charge you more if you go over some limit, would be like saying you get unlimited refills, but you have to pay by the ounce if you go over 64 ounces (regardless of the size of your cup).
The problem with the ISPs isn't that they're writing down rules to prevent problems, but that the rules they're creating (data caps) aren't the solution to the problem they claim to be fixing. It's purely based on jacking up their profits, and the only reason they can get away with it is because of a lack of competition in most markets (and/or implicit or explicit collusion).
There are much better ways to control allocation of available bandwidth than data caps, but they aren't as ridiculously profitable for the ISPs.
All-you-can-eat places do have rules. The food has to be put on a plate, you can only have one plate at a time, you can't share, you can't cherry-pick from the serving dish, you can't throw away too much of what you've taken before refilling. You can't fill up a 50 gallon bucket with "unlimited refill" soft drinks, and you can't stretch out one meal to cover the whole day.
I've never had anyone give me a problem when I ask for a 5th bowl of soup and 3rd salad on an "unlimited refill soup-salad lunch special". I've had no problems getting my 7th fried catfish refill or 6th order of unlimited shrimp. Usually I don't pig out so much, but sometimes I "save some room" for it.
Picking away the breading and throwing that away is violating the rules. If the rules weren't written down, they should have continued to serve her, and then written down the rules so it isn't a problem in the future.
ISPs don't pay for bits, they pay for bandwidth. They have a completely different business model than a restaurant. The analogy is inapt.
The resource they're selling is bandwidth, not bits. There are unlimited bits, crunch all you want, we'll make more.
Bandwidth isn't unlimited, and no one has ever sold "unlimited bandwidth".
There's no reason for putting a limit on the unlimited resource in order to control allocation of the limited resource, it's a very crude and ineffective method. When I didn't watch that Netflix movie at 3am Sunday morning, the ISP didn't save up those bits, so why should it affect how much it costs for the bits I'm using Wednesday morning at 2pm 3 weeks later? Throttling or charging more based on usage in a billing period simply doesn't make any sense.
Sell the bandwidth (say, by the Mbps), and at any particular point in time your connection from point A to point B will have a throttle of N% of your base rate. If you aren't trying to use more than that, you won't even see that there's a limit. N is determined based on current network congestion and your recent usage (e.g. last 15 minutes or something on that order). Very low recent usage (as a percentage of your base rate) would give a boost to your throttle level, e.g. a 150% bonus. High congestion for a particular network segment would decrease N for any connection using that segment. I leave the algorithm for propagating congestion information as an exercise for the reader.
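A minimal sketch of that throttle calculation, with the function name, bonus factor, and input scaling all my own assumptions rather than anything the comment specifies:

```python
def throttle_rate(base_mbps, recent_usage_frac, congestion_frac,
                  low_usage_bonus=1.5, max_burst=2.0):
    """Hypothetical per-connection throttle.

    recent_usage_frac: fraction of the base rate used over some recent
                       window, e.g. the last 15 minutes (0.0 = idle).
    congestion_frac:   fraction of the bottleneck segment's capacity
                       currently in use (0.0 = idle, 1.0 = saturated).
    """
    # Light recent usage earns a burst bonus; heavy recent usage loses it.
    usage_factor = 1.0 + (low_usage_bonus - 1.0) * (1.0 - recent_usage_frac)
    # Congestion on the segment scales everyone down proportionally.
    congestion_factor = 1.0 - congestion_frac
    n = usage_factor * congestion_factor
    return base_mbps * min(n, max_burst)
```

So a 10 Mbps customer who has been idle on an uncongested path bursts to 15 Mbps (`throttle_rate(10, 0.0, 0.0)`), while a heavy user on a 40%-loaded segment drops to 6 Mbps (`throttle_rate(10, 1.0, 0.4)`). Real congestion signaling would of course be far messier than two scalar inputs.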
This has the effect of shifting usage to underutilized times/locations, which makes the network more efficient.
Such a method does need some transparency, with guarantees of percentage of time that you'll be able to get a certain percent of your base rate, perhaps as a function of time/day of week. If you can live with 5Mbps at peak usage, when the throttle might be at 60% for an hour, then you'd buy an 8-10Mbps plan, which might give you a short burst of 15-20Mbps even at peak, and 30Mbps sustained at 3am Sunday.
What do you care if someone is "wasting" bits when it doesn't impact anyone else? The actual marginal cost of transmitting data bits instead of idle/keepalive bits is a rounding error, the ONLY reason to be measuring data is to allocate the limited resource, which is bandwidth.
So ELIoT compiled is about 2.9MB, plus the C++ standard library (which is another 1.5MB or so) - this is compiled for MacOSX.
The code to create an interpreter and have it run a file is about 1KB, and the Tcl library is under 2MB.
I'd have to look more closely at ELIoT to see how comparable the two are in terms of capability.
Actually, sort of reminds me of Tcl. I wonder how it compares size- and speed-wise.
Tcl also has Tk available for anything with a display.
the Secretary shall
There are plenty of places in the law (in general) where references to things are somewhat indirect. If I'm operating on behalf of someone with power of attorney, there are regulations referring to the person I'm representing, but they actually apply to me.
I see the wording of the above section of the ACA as effectively setting up "an exchange established by the State" on behalf of the State when it won't do it for itself.
It is also beyond reasonable to believe that, if the intention was to create such a major difference in the case of the Secretary establishing the Exchange, it wouldn't have been explicit. There are no references to an "Exchange established by the Secretary", and there are no restrictions put on such Exchanges in section 1311. All of the references are to "an Exchange established by the State under section 1311 of the Patient Protection and Affordable Care Act" (6 of them exactly that, one "this section", one dropping "section").
If some of the other references don't include Exchanges established by the Secretary, then such Exchanges would have some serious deficiencies. If the intent was to severely cripple such Exchanges, why would they be established at all?
Not if the one time pad is much longer than one transaction and you only use part of it for each one.
The real problem is that the bank has to (securely) keep a different one time pad for each customer.
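A sketch of that scheme, where each transaction consumes the next unused slice of a long shared pad (function names and pad size are illustrative, not from the comment):

```python
import secrets

def make_pad(length):
    """Bank and customer each securely store a copy of this pad."""
    return secrets.token_bytes(length)

def encrypt(pad, offset, message):
    """XOR the message against the next unused slice of the pad.
    Returns the ciphertext and the new offset; each pad byte is
    used at most once, which is what keeps it a true one-time pad."""
    key = pad[offset:offset + len(message)]
    if len(key) < len(message):
        raise ValueError("pad exhausted: a fresh pad must be issued")
    return bytes(m ^ k for m, k in zip(message, key)), offset + len(message)

# XOR is its own inverse, so decryption is the same operation
# applied at the same offset.
decrypt = encrypt
```

Both sides just have to stay in sync on the offset, and the bank has to store one such pad (and offset) per customer, which is exactly the key-management burden described above.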
Most of the time you'll chose wrong, so you'd like to switch if only you had a clue of which one to switch to. Monty (because he already knows which one is the big prize) has conveniently given you a clue. Even though he's shown you one of the wrong doors, it's still true that your first choice was probably wrong. If your choice is probably wrong, and there's only one choice remaining, it's probably the right choice. Switch!
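That argument is easy to check by simulation; here's a quick sketch (names and trial count are my own choices):

```python
import random

def monty_trial(switch):
    """One round of the Monty Hall game; returns True on a win."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that is neither your pick nor the prize.
    opened = next(d for d in doors if d != pick and d != prize)
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
stay = sum(monty_trial(False) for _ in range(trials)) / trials
swap = sum(monty_trial(True) for _ in range(trials)) / trials
# stay comes out near 1/3, swap near 2/3
```

The intuition in the comment maps directly onto the code: your first pick is wrong 2/3 of the time, and whenever it's wrong, switching wins.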
1, 3, 9, 27, 81
Base 3 with digits -1, 0, +1
-1 means on the same side as the object being weighed, +1 means on the opposite side. Can weigh up to 121.
Am I hired?
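For the record, the weight placement is just balanced-ternary conversion; a small sketch (the helper names are mine):

```python
WEIGHTS = [1, 3, 9, 27, 81]

def place_weights(target):
    """Balanced-ternary digits for target, least significant first:
    -1 puts that weight on the same pan as the object, +1 on the
    opposite pan, 0 leaves it off. Valid for 1 <= target <= 121."""
    digits = []
    n = target
    for _ in WEIGHTS:
        r = n % 3
        if r == 2:            # remainder 2 becomes digit -1, carry 1
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    if n != 0:
        raise ValueError("target out of range for these weights")
    return digits

def balances(target):
    """Opposite-pan minus same-pan weight should equal the object."""
    d = place_weights(target)
    return sum(w * s for w, s in zip(WEIGHTS, d)) == target
```

For example, weighing a 2-unit object puts the 1 on the object's pan and the 3 opposite; the same scheme covers every integer weight up to 1 + 3 + 9 + 27 + 81 = 121.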
If I was taking someone's exclusivity, then I'd have some of it. Guess what I DON'T have if I copy something, with or without permission?
If I take something from you, then I have it and you don't (despite various idiomatic phrases, e.g. to take someone's virginity). If I haven't taken something from you, it isn't theft. If I copy something, I haven't taken anything. It may be copyright violation, but it isn't theft.
Check crontab entries trying to run an executable in
A trojan that's inside a bulk e-mailer program, yet. Almost funny.
I was programming in Pascal on a Lisa (dual boot to the Lisa command-line OS (Lisa Workshop) for development and MacOS for testing, occasionally booting to the Office environment). I bought it shortly before it came out as the MacXL, so had non-square pixels. I wasn't rich, and it wasn't any more expensive than a PC would have been with the same capacity.
The entire thing (Office 7/7, Workshop, MacWorks) plus system partitions for each was 10MB. System RAM was 1MB. I can compress and copy that whole system in a few seconds across a network now.
I'm sorry you were stuck with BASIC, but that wasn't exactly cutting edge in 1985, and there was lots of development in better environments.
A couple years later I started using Lightspeed/THINK C. No NEAR/FAR pointers thankfully. I avoided Intel stupidity for many years.
C really hasn't changed very much. The biggest change has been function prototypes. POSIX and ANSI certainly helped, especially with esoteric details of things like real-time and multi-threading/multi-processing, but that didn't enable much, just made it more portable. There are still plenty of incompatibilities despite all of that standardization (e.g. autoconf).
C++ as an object model was there. It was a poor model, and it still is. There are a lot more features now, but a lot of the "extra complexity" that modern hardware enables is spent dealing with the extra complexity C++ adds. I never used it, but maybe the world would be in a better place if THINK Object Pascal had caught on more.
CVS started out as shell scripts working with RCS. There were also plenty of other revision systems that had been around for a long time (e.g. NOS MODIFY). It's not that the concepts were unknown, just that the hardware simply didn't have the capacity and speed, and networking it all together was much slower and less available.
In the meantime, plenty of people were writing things in Pascal for the Mac. You had a resource compiler with resource files. You could write things in C, on a Unix system. You could build things with "make". Most of the software tools used to compile Linux and most of the current standard software was already in existence. There were source code control systems. There was X Windows. There was TeX. There was PostScript. There were a LOT of things that make up the majority of the software tools still in use today, and most are very little changed since then.
Sure, git is better than CVS. A large part of that difference comes down to the constraints of the hardware available at the time; you simply couldn't have done git in 1985 with the hardware of the day.
The basis for Object Oriented Languages was well established, as was the basis for multi-threading (see Path Pascal, C++, Smalltalk).
What's been done since then is to take advantage of the massive increase in speed and storage available. Sure, there have been some incremental improvements to languages and utilities and development environments, but the impact that's had compared to the hardware improvements is fairly small.
The main advances in programming have been with encryption and compression. Everything else would have fast-forwarded within a few years if today's hardware had all of a sudden been made available back then.
Remember: use logout to logout.