Regarding bridges/roads and tolls: One of the rationales for keeping tolls on roads and bridges is to collect money to maintain them once they are paid off. I've seen this reasoning used in three states, and in all cases the tolls were increased "because the cost of maintaining the roads keeps going up." In Cook County IL, the real reason the tolls were kept on was that sub-standard work had to be torn up and re-done -- multiple times. The reason the work was sub-standard is left as an exercise to the reader.
I've never lived in a state where the tolls were retired and the booths torn down.
Dig a little deeper, and you find out that the governments appreciate how tolls free up general revenue for other spending.
The introduction of a repeater into a cell system means that the engineering of the cell boundaries can be affected. Now, for boosters that are used in buildings that shield the RF, there is little engineering that needs to be done -- you are essentially extending the antenna outside the shield. (And you can get repeater antennas without boosters that do the same job, and I suspect they are *not* covered by this regulation.)
When you have an active repeater, the cell signals from the provider can be relayed as well as the signals from your cell phone. With microcell design, this can play hob with the clearances, so that a phone will see two cell sites courtesy of your repeater.
I'm not an expert on cell systems, but I remember some of the arguments used to keep people from using cell phones from airplanes.
Way back when, while attending the University of Illinois (major: Computer Engineering), I wanted to take a junior-level CS course as a freshman. (CS306, to be exact, taught by Gillies.) In order to take the course, I had to satisfy the prerequisites. So I took the exams for the FORTRAN and ASSEMBLER courses. My advisor encouraged me to blow through the two lower courses: "I don't want you getting bored." Both exams were a piece of cake because I had been programming in both languages, plus PL/I, for two years in high school. (Funny story: for the ASSEMBLER course, the final exam was prepared by the professor, and the teaching assistants took the test with the students, to help set the curve. I missed one question, the TAs missed the same question plus one additional question, so I ended up setting the curve. The other members of the class were not amused.)
I was accepted into the CS306 class, and ended up teaching the first two weeks, because I was the only person in the classroom -- the teaching assistants included -- who knew PL/I cold, and PL/I was the language used for the machine simulator. I also helped debug the simulator. And I was a "group of one" (the standard was three-person teams for the term project) because the professor thought that anyone on my team would not benefit. So I ran solo. And I freely consulted for the other teams, with the professor's blessing and strict limitations on the kind of help I could provide.
(Calculus proved to be my downfall. Long story. Even the Dean of Engineering became involved, but the damage had been done. After working for a corporation for two years, I used the corporate tuition reimbursement program and went to junior college -- and aced all four calculus courses, all the way through Differential Equations. I just needed the right preparation.)
I'm sorry, I have a real problem with the underlying assumptions your answer makes about the process. There should not be a high wall between groups. Developers should not install it, but that doesn't mean the developers are not there when QA or Configuration Management or whoever installs it. As a long-time developer, I learned more from watching other people struggle with my software than from all the scaffolding and test-bedding done in isolation. Back when I was doing embedded programming, I made it a point to spend time in System Test to see how my software was being used...and misused. Next to me were the Documentation people, watching out for mistakes or head-scratching -- between us, we would see the holes that needed to be plugged so that the downstream processes would go more smoothly. And I would go out into the field, to customer sites, from time to time, particularly if a customer was reporting problems. This was particularly true of first launches, because sometimes the devils aren't seen until the customer hits them.
This was true for newspaper composition systems, newspaper press controls, bank check processing systems, key-entry systems, even a technical support group application.
I relate this story about the fallacy of compartmentalization: the General Manager gathered all the employees one Friday. Everyone had just been paid, the weeklies and the monthlies. GM: "I'm not happy with the 'us versus them' attitude that seems to permeate this company. It's affecting our ability to get product customers want into their hands, so we all can get paid. So, tell you what: everyone pull out their paychecks, and fold them so the signature at the bottom is visible. See? All are signed by the same person. That should tell you something: that we should be working for the same goals, so we all continue to get paid." The change in that company was dramatic: instead of silos, it was more like an open-plan office writ large, with people talking with one another. One side benefit: sales stopped selling what we didn't have, and PARTICIPATED in the creation of new products. That company went from one step from closing its doors to being a booming business. My stock went from $1 to $65 a share. In six months. And the company was in the Midwest, not Silicon Valley.
The softball team started doing a lot better, too.
- Build a file from the corpus with three fields: size (with leading zeros), hash of the first bytes (I used a CRC-16 of the first 32 kilobytes, using a really fast implementation taken from a comm program), and the file name
- Sort the resulting file
- Filter out the entries having unique size/CRC pairs; declare the remaining sets of files potential duplicates
- Based on the first filtered file, build a second file with three fields: size (with leading zeros), hash of the entire file (I used MD5), and the file name
- Sort the resulting file
- Filter out the entries having unique size/MD5 pairs; declare the remaining sets of files potential duplicates
- Compare the remaining sets of potentially duplicate files byte by byte (a rough sketch of the whole pipeline follows this list)
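Here is a rough sketch of that pipeline in Python, under a few assumptions of mine: the corpus sits on disk rather than tape, so the write/sort/filter passes collapse into in-memory grouping; binascii.crc_hqx stands in for the fast CRC-16, hashlib.md5 for the full-file hash, and helper names like find_duplicates and partial_crc are purely illustrative.

    import binascii
    import filecmp
    import hashlib
    import os
    from collections import defaultdict

    PARTIAL_BYTES = 32 * 1024  # hash only the first 32 KB on the first pass

    def partial_crc(path):
        # CRC-16 of the leading bytes; crc_hqx is a stand-in for the comm-program CRC
        with open(path, "rb") as f:
            return binascii.crc_hqx(f.read(PARTIAL_BYTES), 0)

    def full_md5(path):
        # MD5 of the entire file, read in 1 MB chunks
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def find_duplicates(paths):
        # Pass 1: group by (size, CRC of first 32 KB); singletons cannot be duplicates
        by_crc = defaultdict(list)
        for p in paths:
            by_crc[(os.path.getsize(p), partial_crc(p))].append(p)

        # Pass 2: within surviving groups, group by (size, MD5 of the whole file)
        by_md5 = defaultdict(list)
        for group in by_crc.values():
            if len(group) > 1:
                for p in group:
                    by_md5[(os.path.getsize(p), full_md5(p))].append(p)

        # Pass 3: confirm the remaining candidates byte by byte
        # (comparing everything against the first member is a simplification;
        # a split group with matching size and MD5 is vanishingly unlikely)
        confirmed = []
        for group in by_md5.values():
            if len(group) > 1:
                first, rest = group[0], group[1:]
                same = [p for p in rest if filecmp.cmp(first, p, shallow=False)]
                if same:
                    confirmed.append([first] + same)
        return confirmed

Calling find_duplicates(paths) with a list of file paths returns only the groups whose contents actually match byte for byte.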
Got really large files in your corpus? Then consider an intermediate step where you hash a larger and different portion of the file (say, a megabyte). You could hash the last bytes of the file so you don't end up duplicating work already done on the first pass. In my case, I didn't need the extra pass because of the data involved. My corpus was on magnetic tape, so I couldn't just compare files byte by byte; I would have had to load them somewhere first to do the compare. So I had to identify the potential duplicates *first*.
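If that extra pass is worth it, a hypothetical tail_hash helper (slotted between passes 1 and 2 of the sketch above) might look like this:

    def tail_hash(path, nbytes=1 << 20):
        # MD5 of the last megabyte (or the whole file, if it's smaller);
        # reads a different region than the 32 KB head, so no work is repeated
        size = os.path.getsize(path)
        with open(path, "rb") as f:
            f.seek(max(0, size - nbytes))
            return hashlib.md5(f.read(nbytes)).hexdigest()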