I'm sure the overwhelming answer to this question will be "Yes", but I wonder if there are cases where implementing downgrading of capability in technology products is justified and even argued as necessary.
- What about a phone that only communicates over a radio protocol that is going obsolete and would cost the carrier (and the customer) much more to keep supporting? (In Canada, CDMA service ended on May 31st - would it have been so bad for customers to get a new product before the shut-off date?)
- What about processor technology with issues that the manufacturer cannot fix in hardware, only with software workarounds that are expensive to maintain (especially on older systems), cause problems for other devices in the system and degrade overall performance? (Any apparent relationship to the Intel/ARM64 Spectre and Meltdown bugs is non-coincidental.)
Personally, I think Apple should be fined/jailed for deliberately degrading the performance of older iPhones in order to drive sales of new devices - and I would not label this "Planned Obsolescence" per se, but damaging customer-owned product in order to generate new sales.
But if the supplier can demonstrate that it is in the customer's best interest to strongly encourage them to replace their existing technology AND it is not price gouging the customers (i.e. the replacement product is made available at cost), should this be illegal?
How am I the bad guy here? I just explained the situation to you.
You're confusing two different situations.
If an employee of a privately owned bakery refused to bake a cake for a gay couple, they can be fired.
The bakery can't refuse to provide products and services to a customer based on the owner's religious beliefs.
But was instantiated incorrectly.
The man is fucking up Google in the worst possible ways.
What tangible metric can you show that Google has been hurt by this?
As far as I can tell, this is a lot of smoke and noise but no fire.
Has Chrome usage gone down because of people being outraged over this? Are companies looking at alternatives to AdWords/Analytics? Are sales of Android phones dropping? What about Google Home devices? How has the stock been affected?
I always felt that the biggest issue was that the shuttle was designed with technology that didn't meet the application requirements. Don't forget that a good fraction of the protective tiles and the engines had to be removed/refurbished between flights (which was not part of the concept) - this added $400M to $1B (depending on your source) to each flight. I think the design amortization costs became insignificant pretty quickly with that additional cost on each flight.
A recurring criticism of the shuttle was that the weight/drag/cost/complexity penalty of the wings made it an inefficient approach in many people's minds - they felt that the "wrapper", as you so eloquently put it, had to be as simple as possible to maximize payload and minimize costs - the refurbishment costs seemed to validate this approach.
Now, we have SpaceX, where the design set aside a certain amount of payload capacity for the fuel and hardware needed to return to the launch pad, allowing the booster to be reused. Many of the same criticisms made about the shuttle came back with the initial Falcon concept, until SpaceX started returning and reusing boosters as well as providing significantly cheaper access to space. I'm looking forward to seeing Dream Chaser's first orbital flights to see if it can provide a cost-effective reusable lander.
So, to maybe articulate my thoughts more efficiently, the US should have stayed with the hardware developed for the Apollo moon missions until the technology to effectively reuse space hardware became available.
I chose 1975 because the numbers in the Time book "Apollo and Skylab: Looking to the Future" (copyright 1976) were based on 1975, and there are additional numbers corroborating these in NASA's "Space Settlements: A Design Study" (NASA SP-413), the 1976 Gerard O'Neill space colonization study, which also provides cost numbers for 1975. It was also the end of the Apollo/Skylab era, so that seemed like an appropriate point to look forwards from.
The cost of launching a Saturn V Apollo mission is NOT equal to the cost of launching a Saturn booster on another mission (which is why I went to such pains to try and isolate the booster costs so they can be compared to modern-day boosters). The numbers I quoted assume development costs were amortized by the Apollo program (which I feel is defensible). The numbers quoted in the Wikipedia entry include development costs paid for during the Apollo years and don't reflect the fact that those costs should not be included in later missions.
"Elegantly executed Skylab"? Wow. I have never heard that description of it before - I can only think that you weren't around for the damage it suffered during launch, the various problems during the manned expeditions (including what was thought of as a ridiculously high per-day cost), or its end as uncontrolled space junk crashing to Earth. I don't think anybody regards it as a shining example of NASA technology and know-how.
Why aren't you looking at Mir as a comparison point? I would think that is a much better example of a long term space outpost.
I'm not sure that you can consider comparing STS launches to Gemini-Titan/Apollo-Saturn launches an apples-to-apples comparison. Don't forget there were up to four orbiters available in the highest-rate years, and they were extensively rebuilt between missions. I don't know how that factors into your comparison.
The problem with history is that it is based on the goals of the person writing it and their sources. It's so easy to find conflicting perspectives and data challenging "current wisdom" that we can't simply rely on what we think we know, on what we can find through basic Google/Wikipedia searches - or on books written more than forty years ago.
Sorry, I have to challenge you on a number of things about your post and the assertions within it - maybe you can provide some links to the analysis that you read to help provide some facts.
I don't think it's fair comparing Skylab to the ISS, as you're comparing a short-term outpost to a long-term station. Skylab was occupied for a total of 171 days by 3 astronauts - 513 astronaut-days - at a cost (in today's dollars) of approximately $10B ($2.2B in 1975). That works out to $19.5M/astronaut-day in today's dollars. The ISS has cost (so far) $150B but has been in operation for over 17 years - even assuming only three astronauts on board for that whole time, it works out to $8M/astronaut-day, or about 40% of Skylab's cost per operating day. The longer the ISS stays up in its present configuration (and if you expand the calculation to include the days it has had more than three astronauts), that number will drop significantly and continue to fall.
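The per-astronaut-day arithmetic above can be re-checked in a few lines. The dollar figures and crew assumptions are the ones quoted in the comment itself, not independently verified values:

```python
# Skylab vs. ISS cost per astronaut-day, using the comment's figures.
skylab_cost = 10e9                  # ~$10B in today's dollars (quoted)
skylab_astronaut_days = 171 * 3     # 171 occupied days x 3 crew = 513

iss_cost = 150e9                    # ~$150B so far (quoted)
iss_astronaut_days = 17 * 365 * 3   # 17+ years, conservatively 3 crew

skylab_rate = skylab_cost / skylab_astronaut_days   # ~$19.5M/astronaut-day
iss_rate = iss_cost / iss_astronaut_days            # ~$8.1M/astronaut-day

print(f"Skylab: ${skylab_rate/1e6:.1f}M/astronaut-day")
print(f"ISS:    ${iss_rate/1e6:.1f}M/astronaut-day "
      f"({iss_rate/skylab_rate:.0%} of Skylab)")
```

The ISS ratio comes out around 41% under these conservative assumptions, matching the "about 40%" claim; counting expeditions with more than three crew would push it lower still.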
Sorry, NASA budgets have never approached DOD budgets - take a look at the US budget for 1967, when the major investments in Apollo were taking place:
(http://federal-budget.insidegov.com/l/69/1967): "General Space, Science and Technology" (which I'm guessing is more than just NASA) is 7% of the budget, while the DOD was 49%.
It's hard finding costs for Saturn boosters sans payloads, but I think you would find that their costs are very competitive with existing expendable launchers (as well as the space shuttle) and in the ballpark of the Falcon 9. What makes it difficult to get apples-to-apples costs is that the Saturn V was not designed to deliver payloads into LEO - the third stage was used to achieve orbit and then restarted to send the CM/SM/LM to the moon. Probably the best way to calculate cost per pound is to use the Saturn V first and second stages that put up Skylab, as well as the Saturn IVB used to send the CM/SM to Skylab.
The Skylab Saturn V first and second stage costs were $50M (in 1975 dollars) with a Skylab payload of 170,000 lb. which works out to $294/lb to LEO. The Saturn IVB which sent the CM/SM and consumables to Skylab cost $25M (in 1975 dollars) with a payload of 46,000 lb. which works out to $543/lb to LEO. I have a Time book on Apollo, from when I was a kid, in which the cost per pound for the Saturn V launch was stated to be $500/lb. - so these numbers seem reasonable. In today's dollars (using http://www.usinflationcalculat...), that's $1,347/lb for the Skylab Saturn V and $2,487/lb for the Saturn IVB. As a point of comparison, the Falcon 9 costs $1,240/lb. The Ariane 5, in its smallest/cheapest configuration is $4,700/lb.
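Those per-pound numbers can be reproduced directly. The stage costs, payload weights and the roughly 4.58x 1975-to-today inflation multiplier are all taken from the figures quoted above, not independently sourced:

```python
# Cost per pound to LEO for the Skylab-era Saturn hardware (1975 dollars).
saturn_v_stage_cost = 50e6     # Saturn V first + second stages (quoted)
skylab_payload_lb = 170_000    # Skylab station weight (quoted)
saturn_ivb_cost = 25e6         # Saturn IVB for CM/SM + consumables (quoted)
csm_payload_lb = 46_000        # CM/SM payload weight (quoted)
inflation = 4.58               # approx. 1975 -> today multiplier (quoted)

v_per_lb = saturn_v_stage_cost / skylab_payload_lb   # ~$294/lb (1975)
ivb_per_lb = saturn_ivb_cost / csm_payload_lb        # ~$543/lb (1975)

print(f"Saturn V stages: ${v_per_lb:.0f}/lb (1975), "
      f"${v_per_lb * inflation:,.0f}/lb today")
print(f"Saturn IVB:      ${ivb_per_lb:.0f}/lb (1975), "
      f"${ivb_per_lb * inflation:,.0f}/lb today")
```

Both 1975 figures land well under the $500/lb number from the Time book, and the inflated Saturn V figure of roughly $1,350/lb sits right beside the Falcon 9's quoted $1,240/lb.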
The STS was a bad left turn for launchers and set the expectation that launch costs would be in the range of $10,000/lb or more. I think that was the real crime - the shuttle's costs got out of control very quickly and nothing was done to rein them in. If the decision had been made to drop the STS and stick with Apollo technology (just like the Russians, who continued working with their 1960s/1970s technology), which was proven, reliable and cheap compared to the eventual STS and expendable booster costs, along with the same NASA budgets for space exploration, then I suspect a station with the ISS' capabilities could have been put up by the late 1970s, and maybe an outpost on the moon by 2001 - and we would have avoided the long drought in government-sponsored manned space exploration.
Maybe it's a Canadian thing but I wouldn't consider this until I could get an unlimited bandwidth plan.
I'd only use the browser minimally except when I had WiFi access which means I would use it the same as any other laptop.
Maybe Google or Microsoft could take on the big providers here in Canada (Bell, Rogers & Telus) and open up the market(s) for this type of device.
Thanx for the link - it looks like there is some performance impact on the ARM64s. Noted here: http://lists.infradead.org/pip...
Never think what you are using is perfect.
Just because ARM processors don't have this security bug doesn't mean there aren't Broadcom ARM processor hardware (or kernel) security issues lurking out there that are as bad or worse.
Modern electronics as a whole is pointless.
When the first STN displays came out, there were a lot of issues with non-working and marginal pixels. How often do you see modern phone or TV displays with *any* defective pixels? I don't know if you're old enough to remember TV sets with CRTs - but you'd go to a store and see a wall of them, all displaying somewhat different colours and brightness (even between the same model). A big reason why they went away was because LCDs provided much better colour management at a lower manufacturing cost.
If you think 100 million "light bulbs" or LEDs (which are diodes) is an issue from a failure standpoint, what do you think about an i7 processor, which has over 700 million more complex devices using basically the same technology? What about a 128Gbit DDR4 chip?
Back when I did memory testing, two of the things we discovered were that:
a) memory chips are actually analog devices made up of arrays of capacitors with current "gates" (which have PN junctions, like a diode, built in), with each capacitor and gate having different electrical characteristics;
b) the electrical parameters of each device change over time.
A lot of work was done to ensure that these devices work reliably for years within spec, so that from the perspective of the user they were digital devices - why would you think the same approach wouldn't be taken for OLEDs, with the end result being a technology that works when required for years on end and provides (moving) images that are superior (in terms of size, density, colour reproduction, black levels and cost)?
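The "analog device behind a digital facade" point can be sketched with a toy model: a DRAM cell is a capacitor that leaks charge through its access gate, and it reads correctly only while its voltage stays above the sense amplifier's threshold, which is why refresh timing must beat the worst-case leak rate. All the constants below are illustrative, not real device parameters:

```python
import math

def cell_voltage(v0, leak_tau_ms, t_ms):
    """Exponential decay of a cell's stored charge; each cell has its
    own time constant because each capacitor/gate pair differs."""
    return v0 * math.exp(-t_ms / leak_tau_ms)

V_WRITE, V_SENSE = 1.0, 0.5   # written "1" level and sense threshold
refresh_ms = 64               # a DDR-style 64 ms refresh window

# Two cells from the same array with different leak characteristics:
for name, tau_ms in [("strong cell", 500.0), ("leaky cell", 80.0)]:
    v = cell_voltage(V_WRITE, tau_ms, refresh_ms)
    verdict = "reads as 1" if v > V_SENSE else "data lost"
    print(f"{name}: {v:.2f} V after {refresh_ms} ms -> {verdict}")
```

The testing/binning work described above is what guarantees that even the leakiest shipped cell still clears the threshold at the rated refresh interval - at which point the user only ever sees a digital device.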
This is an interesting study, but I don't know if the results can be extrapolated to include closed source software.
My problem with this is that I don't see any evidence of:
a) That projects in the study have a published project plan with somebody managing it at a high level (the Linux kernel could be thought of as having a plan with strong central management). I tend to believe that projects in which multiple individuals (with varying levels of understanding of the software, the app's background and issues experienced during development) make changes would end up at a much lower quality level than something managed by a strong, continuous team - this doesn't seem to be a consideration when I RTFA (popularity of projects seems to be a bigger issue).
b) Different development tools used by different developers. In terms of C/C++ typing issues, Windows software developed and built in Visual Studio, Eclipse with MinGW, or something like Komodo Edit with Cygwin and user-written makefiles will identify different typing issues and may generate code that works differently, especially with regards to identifying and handling typing issues. I would like to know how many bug fixes are the result of something that isn't flagged and works fine when built in VS but doesn't work when built with MinGW, leading to a "fix".
b.1) I'm not 100% sure of the methodology used in this study, but wouldn't a file that originally had tabs for indentation, which an editor automatically converts to spaces, be misidentified as a "fix" if it's uploaded back into the repository? This is a combination of b) and c).
c) Different coding styles. I know of several Open Source projects in which a developer has re-formatted code simply because they don't think it's in the "correct" style and they have difficulty reading it resulting in them changing it so they can follow it better. To be fair, I'm sure a lot of us have done that because some people have very different and strongly felt ideas about how code should be formatted.
d) Lack of formal testing methodologies. I don't think many Open Source projects have strong, automated regression testing processes and methodologies before allowing a new release.
e) Difference in functional use of different languages. I would think that methods written in C, C++ and Objective C would be providing more low-level functionality than Clojure, Haskell or Scala. Ruby probably fits somewhere between the two groups.
The corporatization of comics, along with the yearly "big event" (bigger than the year before) in which heroes die, are reborn, lose powers (and maybe limbs), gain powers (and maybe limbs), are transported to alternate realms, and the Earth, humanity or the universe is fundamentally changed in some way. These big events seem to be set up simply to have a movie people can look forward to down the line. Over the past 15 years or so, there's been a real loss of the character-driven stories and arcs that made comics great in the first place. I know Disney/Warner won't give up Marvel/DC as they drive movie and TV profits, which means they drive the comics to create properties/stories for those mediums.
So, where are the great independents that can drive stories away from the corporate oligarchs? There really needs to be some new life/blood brought in.
FORTUNE'S FUN FACTS TO KNOW AND TELL: #44 Zebras are colored with dark stripes on a light background.