Comment Re:depends on your definition of "architecture" (Score 1) 317

I had similar experiences with EA, but still think it can work.

For decades, I worked EA from the perspective of Fullard's first
meaning, but had to deal with many "architects" who only understood
the second meaning. Very frustrating.

Nevertheless, if you actually do EA, you can decipher (some of) the
vast evolved-over-decades human-and-computer systems running the
world's largest high-tech firms. Management may not be willing to
fund cleanup once you identify the technical debt, redundancies,
discrepancies, and gaps, but at least they will know there is an
alternative to ever-growing (and expensive) "unnecessary complexity",
and how to distinguish that from "necessary complexity".

In doing real EA, I found implementation-independent, non-redundant
models (aka conceptual data models) of the firm's state memory were
critical. They became the Rosetta Stone explaining how all the
"processes" (manual and automated) hung together. I used many
different modeling techniques and tools over the years and never found
one adequately rich for the task. A critical problem was that change
management for the models and for the manual-and-automated systems
was politically controlled by communities of integration, each with
its own timetable and a very loose understanding of responsibilities
to downstream systems.

At any rate, given the conceptual data model, you identify which
systems are authorities for each data item. Then (politically) decide
which *should* be the authorities. Fund the authorities for their data
integrity and their interfaces to the rest of the firm, and defund
(ah, politics) the others. These decisions then drive the development
of the manual and automated processes that carry out the design.
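
To make that authority-mapping step concrete, here is a minimal
Python sketch (all data items and system names are hypothetical):
harvest (data item, writing system) pairs from the conceptual model
and interviews, then flag every item with more than one writer as a
candidate for the political "which *should* be the authority" decision.

    # Hypothetical sketch: find data items with multiple writing systems.
    from collections import defaultdict

    # (data item, system that creates/updates it) pairs, harvested from
    # the conceptual data model and interviews with system owners.
    writes = [
        ("customer_address", "CRM"),
        ("customer_address", "Billing"),   # two writers: a redundancy to resolve
        ("part_number", "PLM"),
        ("order_status", "OrderMgmt"),
    ]

    writers = defaultdict(set)
    for item, system in writes:
        writers[item].add(system)

    for item, systems in sorted(writers.items()):
        if len(systems) > 1:
            print(f"{item}: multiple writers {sorted(systems)} -> pick one authority")
        else:
            print(f"{item}: de facto authority is {next(iter(systems))}")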

As for Agile: EA provides configuration-managed data authorities and
interfaces. If an Agile project operates within that safe sandbox, it
can work. If an Agile project is left to its own direction, you will
get customer buy-in, T-shirts will be awarded, careers will be
enhanced, and the next generation of IT workers will have to deal
with the mess.

Comment Re:Biggest problem - the lack of comments! (Score 1) 317

I agree we need commentary to explain the code, but I don't think in-line and block comments are the whole story. I've used comments, literate programming, and offline diaries. Most effective for me is:

a) Keep a running diary of the day's thoughts, experiments, and decisions. Along the way explain when you had to reverse a decision and why. This diary (aka "design history") goes with the project as a formal part of the documentation suite.

b) Write up the project's application architecture (hopefully in the context of an enterprise architecture). Include any data and process models that drove the design, the sources of authoritative interfaces from other systems, which change boards control which aspects, and algorithmic choices (e.g., RR vs. LALR parsing). Include a section on how to run the test suite and how to add and test a new feature.

c) Comments in the code. Start with a block that can be auto-extracted to a man page, plus a revision history. Then identify the major sections and refer to the architecture as needed for the full discussion. I don't do much line-by-line commenting, except when the algorithm itself is tricky (e.g., how an FFT bit-reverses its indices).
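
As a concrete (hypothetical) example of the kind of tricky code that does earn line-by-line comments, here is a Python sketch of the bit-reversal permutation at the front of an in-place FFT:

    # Return the bit-reversed index order for an n-point FFT (n a power of 2).
    def bit_reverse_indices(n):
        bits = n.bit_length() - 1          # number of address bits in an index
        out = []
        for i in range(n):
            r, x = 0, i
            for _ in range(bits):          # reverse the bits of i, one at a time
                r = (r << 1) | (x & 1)     # shift result left, pull in low bit of x
                x >>= 1
            out.append(r)
        return out

    print(bit_reverse_indices(8))          # [0, 4, 2, 6, 1, 5, 3, 7]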

Comment Re:Bloggers Gotta Blog (Score 1) 317

Yes,

30 years as an enterprise architect, repeatedly deciphering crufty system ensembles and recommending cleanups, taking into account the corner cases which led to "necessary complexity" in the first place. But of course the price tag was almost always too high, and the payoff too far off for suits' career paths. So we tack on yet another workaround, and tech debt ("unnecessary complexity") accumulates.

Then every 5-10 years, the suits are bedazzled with a sales pitch: Your systems are too expensive because they are too complex. Your tech people are too dumb to know this. Buy our product and all will be well. We amortize our meticulous work over many customers.

And so we buy the product family. It doesn't do the job. The suits are embarrassed and demand IT make it work. And a new cycle of workarounds is born.

Comment Re:The Actual Danger. EE and CompSci (Score 1) 526

Agreed: programming 'hello world' is easy (well, actually it can be tough on a new platform with a new language). Programming in parallel for petabyte data streams, not so much.

Comments on comparing EE and CS:

Personal context: My dad was an EE, did systems engineering on missile programs, and used computers to improve security systems (the kind with fences and machine guns). My mom was an ME and librarian and used computers to automate library catalog systems. My sister was BS CompSci and MBA but made her mark as a domain expert on union contract provisions. I was first a biologist, then MBA and accountant, then BS EE and MS CompSci. After the usual sysadmin/DBA/programmer/analyst roles, I made my mark as an enterprise systems architect.

What do these experiences have in common? We all used computing to get work done. Domain insights were critical -- knowing what needs doing before deciding whether it is best done with a 3-ring binder, a mobile app, or a supercomputer (much less which programming language or algorithm to use).

What do engineering and compsci provide? Engineering provides a solid math foundation and habits of considering fault modes and error budgets. It also provides domain knowledge: if you are writing Lisp code to design airfoils, you need some background in aeronautics. Compsci helps with getting different computing systems and canned algorithms to talk with one another -- thus parsing, code generators, data analysis, DB design. Compsci also has treasure troves of algorithms to automate boring, error-prone human effort.

Where are engineering and compsci different?

1. Production volume. Engineers typically design for hundreds or millions of copies. It is therefore worthwhile thinking through the details, building testbeds, checking edge cases, etc. Much of computing is one-off -- solve today's problem, and maybe fix an old program that the company relies on but can't understand. People working on commercial software (software sold in millions of copies) have engineering incentives -- though the ones I encountered were technical basket cases driven by marketing aimed at clueless VPs.

2. Obviousness of success.
a) Engr: A bridge stands or falls. A plane flies or crashes. A heart valve works or not. You can have specs, and test the result against those specs. The physical nature of the construction means that if you are close on the spec, it might sort of work.

b) CompSci: The app is a trade secret, so who knows if the database design makes sense or if the code actually uses the DB? OK, try testing to specs. But there is no tie to natural laws, so no gradual failure. Success on 100 tests is no evidence that test 101 will not fail catastrophically, or that something you didn't think to test will destroy the known universe (a tiny sketch of this 'test 101' problem follows this list). Therefore CompSci depends heavily on correct-by-design, with code reviews to see if the code actually implements the design.

3. Vulnerability to remote attacks.

a) Non-automated engineered things are subject to local threats like sledgehammers. They cannot be hacked by somebody in a dark room 3000 miles away.

b) ANYTHING automated (from supercomputers to edge sensors to that picture frame in your bedroom) is attached to the internet in some way and can be manipulated by remote ones and zeroes. Even air-gapped systems built from source have to get their C compilers from *somewhere* (or did you toggle in the assembler and work up from there?). And even C compilers can be hacked (see Ken Thompson's Turing Award lecture, "Reflections on Trusting Trust"). Since we humans can't make sense of the ones and zeroes of the data or the code, we have to rely on a rock-solid chain of evidence from a known-good starting point to end use. That requires thinking as cleverly as the best/darkest minds on earth. Oh wait, this is why I retired and took up classical guitar...
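
A tiny Python sketch of the 'test 101' problem from point 2b above (the function and numbers are hypothetical): a hundred passing tests, and the one untested input still breaks the invariant.

    # Map a score in [0, 100] to a histogram bucket 0..9.
    def bucket(score):
        return score // 10        # BUG: bucket(100) == 10, one past the last bucket

    for score in range(100):      # 100 passing tests...
        assert 0 <= bucket(score) <= 9
    print("tests 1-100 pass")

    assert 0 <= bucket(100) <= 9  # ...and "test 101" fails catastrophically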
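
And a minimal sketch of one link in that chain of evidence: verifying that a downloaded compiler tarball matches a hash published over a separate trusted channel (the filename and hash here are placeholders). Note this only pushes the trust back one link, to whoever published the hash -- which is exactly Thompson's point.

    import hashlib

    EXPECTED_SHA256 = "replace-with-hash-from-a-separate-trusted-channel"

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
                h.update(chunk)
        return h.hexdigest()

    if sha256_of("compiler-source.tar.xz") != EXPECTED_SHA256:
        raise SystemExit("tarball does not match published hash; do not build")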

Comment Re:Merger in name only (Score 4, Interesting) 132

The McDonnell maneuver was done by a subset of Jack Welch's gang at GE -- they destroyed major companies, including McDonnell and Boeing, as they worked their way through the Fortune 500. The rank-and-file engineers at 3M, McDonnell, and Boeing were nearly wiped out as the financiers used stock prices inflated by cost-cutting to pump up their bonuses. The technically competent Boeing managers who survived did so by drinking the downsizing kool-aid.

Is Boeing dead? Despite the financiers' best efforts, there are still competent people (systems engr, design engr, mfg, testing, customer service, et al). Given enlightened mgmt, they could pull the company out of a nose dive:
    a) Wars (and thus weapon systems) aren't going away soon.
    b) If "Buy American" includes commercial aircraft, then Boeing is the main player.
    c) Boeing's disastrous flirtation with offloading/outsourcing shows the need to rebuild vertical integration.

Comment Re:Paper is king (Score 1) 147

Thank you. I came upon the problems with e-voting in 2004, and it was an old issue even then. For some historical links and discussion: https://www.seanet.com/~hgg9140/politics/evote/index.html

Remember, trillions of dollars change hands based on elections. We cannot afford systems which are hackable with a measly billion-dollar investment in hardware, software, and bribes. We deal with bribes by having competing factions provide poll watchers. Since they aren't likely to be technical wizards, we MUST have technically simple voting/counting/tallying mechanisms if we are to trust the results.

The National Association of Secretaries of State (NASS) used to be for voting machines (after all, voting machine companies sponsor NASS events), but after briefings from the FBI, NASS now realizes paper ballots are the way to go. [Personally, I believe any politician who has not fought for all-paper knows the other approaches are hackable and in fact has hired people to hack them in his/her favor. There is just no excuse for voting machines, or even worse, online voting.]

The current state of the art is:

a) Paper ballots with outer signature envelope and barcode, inner security envelope, ballot with tearoff matching barcode strip. You retain the barcode strip, and can check on-line if/when the outer envelope has been received and processed.

b) Ballots are mark-sense. You fill in the ballot bubbles in ink, then insert the ballot in the inner and outer envelopes and mail it in or deposit it in an official dropbox.

c) Upon receipt, signatures are matched and healed as needed, then security envelopes are separated so that they can't be matched back to the outer envelope.

d) Ballots are taken from the security envelopes and machine-counted with as dumb a machine as you can get to do the job. No WiFi, no Internet, no proprietary closed software, no chips from untrusted fabs, no NDAs. Updating and testing of the hardware/software is done under the watchful eyes of competing technologists.

e) Counts are tallied to precinct, district, county, state, etc. If a recount is needed, it is done by hand, and the tallies are redone by hand. The ballots, of course, are kept under lock and key.

f) Here is the real winner: after EVERY election, not just close ones, precincts are chosen at random for hand counting. Entire precincts are used, not samplings from within a precinct. If ANY precinct has a discrepancy between the machine tally and the hand tally, then we citizens all get seriously intense in determining why.
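
Here is a minimal Python sketch of step f) (precinct names and tallies are made up): choose the audit precincts with an unpredictable random source so nobody can game which ones get hand-counted, then compare tallies.

    import secrets

    machine_tally = {"P-001": 412, "P-002": 389, "P-003": 1203, "P-004": 77}
    hand_tally    = {"P-001": 412, "P-002": 390, "P-003": 1203, "P-004": 77}

    # secrets.randbelow() is unpredictable (unlike a seeded PRNG), so the
    # choice of audit precincts cannot be gamed in advance.
    pool = sorted(machine_tally)
    audit = [pool.pop(secrets.randbelow(len(pool))) for _ in range(2)]

    for p in audit:
        if machine_tally[p] != hand_tally[p]:
            print(f"DISCREPANCY in {p}: machine {machine_tally[p]} vs hand {hand_tally[p]}")
        else:
            print(f"{p}: machine and hand tallies agree")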

Comment Re:cgi-bins (Score 1) 85

Yep, all Perl in the early days. There were "hello world" examples for CGI in an assortment of languages (including Forth?), but real life was Perl and CGI.pm.

PHP: There were and are some useful apps in PHP, but I don't trust it or them. I maintained a Fortune 100's OSS toolkit for over a decade and was responsible for getting security upgrades, building them from source, and distributing them to in-house developers. PHP and all things developed in PHP were regulars on the security agenda. Policy was to not develop in PHP, and if you used an off-the-shelf PHP app or library, to link to it via a service from some other language.

Perl: Perl has always had serious power under the somewhat cryptic syntax. When I converted from Perl to Python, I made a Python rendition of Perl semantics and cgi.pm (a minimal sketch of the Python side appears after the two stories below). Eventually I gave up on Perl semantics and went fully "pythonistic". A couple of experiences that pushed me away from Perl:

1. I ran a code review of a Perl CGI app. It was the developer's first Perl app (and first web app), so there was no discernible style across the 1000 or so LOC. It took us a couple of weeks to get through the app and figure out what it was doing. Management didn't want to pay for us to rewrite it in a maintainable style. A decade later it was still being used (though probably not being maintained).

2. Once a buddy and I (both fairly experienced in Lisp/Prolog/REs/CFGs, et al) were asked to debug a Perl app doing ad hoc text parsing. We found a line of code that would not work right. We kept permuting the token order of the line until it worked, and then puzzled out why it worked. Took all afternoon. One line.
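
For the curious, here is the promised minimal sketch of the Python side of that cgi.pm conversion (the query string is hypothetical): the standard library parses CGI form data with no Perl-style magic.

    from urllib.parse import parse_qs

    query = "name=Alice&lang=python&lang=perl"   # a sample QUERY_STRING
    params = parse_qs(query)

    print(params["name"][0])   # "Alice"
    print(params["lang"])      # ["python", "perl"] -- repeated keys become lists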

Comment Re:Organization -- server queue theory (Score 1) 255

"An overflowing inbox is a sign of difficulty with this skill." Maybe in your world, but not for many people.

In many (most?) cases this is about good old server-queue theory, with you (the reader of email) being the server. When the workforce is cut over and over, the remaining staff get what used to be many people's workloads. Eventually they can't keep up.

In my case: before I retired I was working with several (more than 5) wholly distinct intra-corporate communities -- each with its own email chains and questions and initiatives. I was generally a key participant in each community, meaning my input was required. It took 2-3 hrs/day to work through them on a good day.

But if I was out on vacation or sick or traveling or at training or at a conference, the email stacked up. When I got back, the backlog was in the hundreds or thousands. The normal daily load did not stop to wait for me to catch up. I might throw in a couple of extra hours for a few days, but I also needed think time to plan the next project, attend meetings, deal with personnel issues, etc.

So despite best intentions, you fall behind. You can't even scan it all to find what is critical, and there is certainly not enough time to send apology notes. Eventually you give up on the entire tranche -- just delete the hundreds or thousands of lost causes and start fresh. Why delete? In my context, if anyone needed my input on something from that tranche, they would resend it and call or stop by my office to make sure I got to it.
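
A back-of-envelope Python sketch of that queue arithmetic, with made-up numbers: you are the server, and mail keeps arriving whether or not you are serving.

    arrival_rate = 120   # messages/day, every day
    service_rate = 150   # messages/day you can clear when present (2-3 hrs/day)
    days_away = 10       # vacation, travel, training...

    backlog = arrival_rate * days_away      # 1200 messages pile up while you are out
    surplus = service_rate - arrival_rate   # only 30/day of spare catch-up capacity
    print(f"backlog {backlog}, cleared in {backlog / surplus:.0f} extra days")
    # -> 40 extra days; utilization rho = 120/150 = 0.8, and as rho approaches 1
    #    the catch-up time blows up, which is why you eventually just delete.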

Comment Re:One problem: no normative definition of "Agile" (Score 2) 445

Why is Agile a moving target? I think it goes like this:

The original vision was great: many projects are hard because they require high-quality human interfaces. Humans are lousy at pre-specifying what they want in an interface, so waterfall specs are slow and useless. The better approach is to prototype stuff and have the users try it out. If your development cycle is vastly faster than the users' review cycle, you do real code instead of prototypes. Continue to grow the app as the users' understanding of their needs evolves.

What's missing? Agile is all about doing one new app very well and very quickly. Often (in my experience) the app shouldn't have been done in the first place -- it was a workaround to a workaround, adding one more plop to the enterprise steaming heap. Instead, we need to do systems (portfolio-wide) architecture at the start of a project to determine the business-case go/no-go, the impact to the data model, the impact to the allocation of data to logical resources (which apps handle which data?), and the external interfaces. Then use Agile for the evolution of the user interface.

Isn't architecture too slow -- defeating the purpose of Agile? On a small project the architecture phase is 2 days. On a serious project it is maybe 2 weeks. Anyone doing 2 months of architecture either a) doesn't know what they are doing, or b) is dealing with a topic too loose to architect, much less code. Assuming the project proceeds, revisit the architecture when the data model, data allocation, or external interfaces change.

So why does Agile itself change? Agile gets in trouble when its true believers claim it is the revolutionary solution to all of software engineering, and then it has to jump and jig to handle what it calls special cases. "Uh, we meant of course to include interface control governance; we just forgot to mention it." Calling something "Agile Enterprise Architecture" doesn't make it so.

Comment The ideal replacement is not free-as-in-beer. (Score 1) 216

As others have noted, it costs money to run servers, apply patches, and staff help desks. Any attempt at getting someone else to pay (sponsors, states, ads) leads right back to trouble. To be trusted, it will have to be fully paid for by user fees, with iron-clad no-commercialization rules, and have user voting rights on changes to the bylaws.

So, how much would it cost? Given community-supported OSS software, cloud servers, and volunteer help communities, it wouldn't take much per person "at scale". If you assume $5/mo for 10M users (a reasonable start toward world domination), you have $600M/yr. You can run a lot of servers on that.
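
Spelling out that back-of-envelope arithmetic (all figures are the assumptions above, not real costs):

    users = 10_000_000
    fee_per_month = 5                       # dollars
    revenue = users * fee_per_month * 12
    print(f"${revenue:,}/yr")               # $600,000,000/yr for servers and help desks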

But how to get people to move, given the network effect of Facebook? I mostly leave it to others to solve -- I don't use Facebook precisely because I knew the business model from the start. But I would suggest:

a) Offer Facebook a free pass on the 2011 settlement ($40,000 per violation times 87M users) in exchange for open copyright on Facebook's look-and-feel.

b) Ask the Parkland students to define a new-and-better Facebook replacement (i.e., the copyright issues above). They in turn would communicate with their vast Internet-native community. Then we old(er) guys could code it up.

c) Write it in Python, with optimizations where needed in C. I don't trust democracy to Java, Javascript, and C++.

Comment Re: Partisanship and Censorship From the Ground Up (Score 3, Interesting) 216

At best, this is an historical sidebar. At worst, a ploy to take us off topic. I'll respond once and then drop out.

I agree the 2nd Amendment was there originally to protect the others, or worst case it was "the reset button on the constitution". Reading the letters and essays by Federalists and Anti-Federalists makes clear this was the intent. Military technology at the time made it sensible.

However, successful rebellion means "We won", not "We killed a lot of people and then we were killed". For a rebellion, you are putting yourself on the line, not some proxy nation/tribe/oppressed people. "Our lives, our fortunes, our sacred honor", et al. The Gatling gun (Civil War), the Maxim (WW I), the tank (WW II), the Apache attack helicopter (Vietnam), and the AI-controlled drone (Middle East) pretty much obliterated the value of rifles (automatic or not) for purposes of successful rebellion against the US Govt. Furthermore, for urban rebellion, guns make enough noise to be triangulated, and any human holding one will be caught on video and identified. Any serious rebel is thinking Cambridge Analytica-style Big Data, weaponized drone swarms, and infowar campaigns to convince a few million people to risk death for the cause. [Ohhh, so that is what the Koch brothers and Mercers are up to... :-)]

It becomes pretty clear that informed (not mal-informed) voting is vastly more efficient than guns for changing the nation's path. For that we need a sense of community, not wedge issues like gun control. Which brings us back to the need for a non-commercial free-as-in-speech social media mechanism.

So, while I reload carefully enough to shoot "ragged clover leafs" with my rifle, and can watch the rechamber with my pistol, all my energy these days goes into makerspaces, meetups, and playing music. "Make love not war".

Comment Re:This is the attitude of many security experts (Score 1) 219

Way before the late 2000s. In the early 2000s, several of us were trying to explain these issues to the county auditor (who controls election mechanisms), and used this site to collect talking points:
http://www.seanet.com/~hgg9140/politics/evote/index.html

When we were researching, we found the main issues were already old news by then.

Even then we were advocating paper ballots and manual processing of those ballots. That it is still not solved nationwide is due to well-funded politics, not technical incompetence: "You can wake a man who is sleeping. You cannot wake a man who is pretending to be asleep."

Comment Drivers for polymaths? (Score 1) 212

What drives a polymath?

As a practicing dilettante, I have at times drifted into polymath territory (see http://www.seanet.com/~hgg9140/). NOTE: that website deliberately does not cover topics I actually did for a living (sysadmin, developer, DBA, OSS champion, systems modeler and architect, etc.). Also, I haven't yet written up cartridge reloading, bow hunting, metalworking, woodworking, boat building, and Italian cooking.

On that basis I think the key ingredients are:

a) Unquenchable curiosity and naivete -- if some human can do it/think it, why not me?

b) Acceptance that getting there takes work -- you have to do the homework and live the experiences. I'm still struggling with Riemann curvature so that I can read about relativistic fluid dynamics. I am also bogged down in Homeric Greek on the way to reading The Odyssey.

c) Thus, willingness to be the most ignorant/incompetent guy in the room, combined with a grim resolve to "catch up with the others." It is emotionally painful, but you have to do this topic after topic, year after year, decade after decade.

d) Experience in the act of learning -- where to get the best books and youtube videos, when to splurge on expensive equipment, when to take classes or entire degree programs, etc.

e) Some sense of what "success" means. In my case, I want to be well-grounded -- so that I can later learn from true masters and can recognize BS when it shows up.

f) Willingness to share what you have learned. You don't have to be a full master to help other raw beginners get going.

Comment Re:No standards at all (Score 1) 640

Microsoft holds some of the OpenGL patents. So now isn't Ubuntu in danger of "submarine" patent cases?
http://www.zdnet.com/news/microsoft-claim-shakes-graphics-world/124000

Usually these cases don't surface until the transition is well locked-in.

Comment Re:Dangerous claim (Score 1) 675

I agree with the sentiments, but that's not the reality. When I did the OSS m3na (Modula-3 numerical analysis) package, I started with a clean-room implementation of the function calls in "Numerical Recipes". Their lawyers claimed even using the same function names and parameter lists was copyright infringement. Being a lowly individual working out of my basement, I restarted the package. (Actually, I think it turned out much better than NR, because I restructured it around "groups" and their operations rather than a random collection of functions.)

Since then I've learned enough about US and international law to deeply distrust all legal actions surrounding "intellectual property". Not to get too class-war about it, but if the big boys claim the law says X, then by gollies, it is X, and there will be armed enforcers to help you understand.
