
Comment Re: COBOL isn't hard to learn (Score 1) 371

Hold up. I've worked with COBOL. There's the COBOL that they teach you, and then there's the stuff you're going to see out in the field. There's a huge difference, and it'll cost any company time to get a new person up to speed.

Each company has its own way of doing subfiles, its own standard for indicators, its own ways of dealing with file access, and so on. The language per se isn't that hard; it's everything else about dealing with COBOL in a real system that's the time muncher.

Additionally, you'll run into things where this group of programs was written when file indicators used 60-69, that group uses indicators 80-89, and another group uses actual variables. Oh, and this handful of groups predates the compiler getting table (COBOL array) support. And that's not getting into the literal half dozen ways to interact with a display file, nor service programs, etc.
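To make the "every shop has its own conventions" point concrete, here's a tiny hypothetical sketch (in Python, since the exact COBOL would vary by shop anyway) of the same "refresh the screen" decision written two ways; the indicator number 63 and the flag name are made up for illustration:

```python
# Two shops, same logic, different conventions -- a new hire has to learn both.

# Shop A: numeric indicators; you just have to know that 63 means "refresh".
indicators = {63: True}

def needs_refresh_shop_a(ind):
    return ind.get(63, False)   # magic number: the meaning lives in tribal knowledge

# Shop B: a named working-storage-style flag that carries its own meaning.
WS_REFRESH_SUBFILE = True

def needs_refresh_shop_b(flag):
    return flag                 # self-documenting, but incompatible with Shop A's style

print(needs_refresh_shop_a(indicators), needs_refresh_shop_b(WS_REFRESH_SUBFILE))
```

Neither style is wrong, which is exactly the problem: the ramp-up cost is learning which dialect a given program speaks.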

COBOL is easy to learn, but knowing the syntax of the language just doesn't prepare you for how wild west programming used to be. Code from the 60s tends to be along the lines of, "whatever solution I came up with that day to solve this one problem is what I went with." A huge lack of coding standards, and thus a lot of inconsistency, not just across programs built the same year: literally, if the developer slept between coding sessions, you could get different styles in a single program from the same person. It was a really undisciplined way of getting things done.

Comment Re: Why not? (Score 5, Interesting) 371

Holy shit, dude. I'm a veteran C++ programmer, and for the last three years a mobile developer for a warehousing company. However, about a year before my current job, I worked in an exclusive AS400 shop. RPGLE, COBOL, CL, you name it, it was still using the stuff from back in the day.

The system was incredibly fragile and had insanely complicated builds: the labyrinth of binding directories, the dependencies, the constant service program signature violations. Customers demanded new functionality and the system just had trouble keeping up. Old programmers would code crap like RPG still lacked arrays, abusing subfiles instead. They'd swear by numeric indicators over the new fancy types. Yeah, because IN73 being on tells me that I totally need to refresh the data on the screen. Eventually the company fired all of the RPG/COBOL programmers because they could never keep up with demand. They were replaced in short order by an off-the-shelf solution, and any new work was contracted out. I stayed on a few months more because I understood EDI and they needed someone to get that all set up on the new solution. Oh, and it was Java based.

Point though, you sound exactly like those old guys. OO overcomplicates crap, you don't need all that fancy stuff, blah blah blah. I worked and coded DDS, SQL, RPG, and COBOL with those people for five years, and I knew their mentality was just setting them up for obsolescence.

The tech industry is brutal, man, and you've got to be keenly adept at rapidly adapting or you'll quickly find yourself no longer employed and lacking serious skills to find your next job. Out of our group of nine, only two besides myself found another job: one doing AS400 maintenance, who took a pay cut for it, and one as an RPGLE programmer with the state government, a smaller hit to the paycheck than the first guy. The rest of the group had nothing but their retirement savings to dip into. We all lost touch over the years, but last I heard two or three more finally found jobs, though they had to eat the cost of moving elsewhere.

Seriously, the only thing that kept me above water was that I had skills in C++ and Java. Since then I've picked up containers with Linux, node.js, Python, and some big data (Hadoop and shit), just to stay current.

That thinking of yours isn't a good kind in this industry. But I get it. When I worked with the AS400 folk, I had just turned 30, so I was seen as the young whippersnapper, etc. I had just left my last job of several years doing C++, and every graybeard (literally) there thought I was there to "think outside the box" and "shift paradigms," when really I was mostly just there to get a paycheck. Learning RPGLE and COBOL was fun, but man, those old guys scoffed at any suggestion to get off a green screen or to make their monolith more modular. They would literally say, "stupid hipster, just trying to make everything harder than it needs to be." And let me tell you, I didn't dress like a hipster.

Anyway, I know, way more memory lane than anyone asked for, but seriously, those guys' thinking ultimately led to their firing. I've worked in a few development teams but not a lot, so I can't say with a lot of experience, but my gut tells me that had they been just a wee bit more open to change and newer technologies, they might still be working at that place.

Comment Re:Finally (Score 1) 366

Personally, I am still trying to figure out what real problem it solves.

I see systemd as the perfect complement to Linux cgroups. It makes managing containers a lot better than how, say, docker does things. But whether you like docker or systemd, I think we can all agree that the way containers are done now, versus the sysvinit + chroot methods, is vastly better. If sticking with the old sysv stuff helps anyone sleep, then go for it. But I remember doing the whole httpd-inside-a-chroot thing, and I remember the headache it caused when one of your 100 boxes was hung up and you had to hunt it down... systemd or docker make managing thousands of systems a whole lot easier.
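As a purely illustrative sketch of what the cgroup tracking buys you over the hand-rolled chroot: with a unit like the one below (the unit name and paths are hypothetical), every process httpd forks lands in the unit's cgroup, so one `systemctl status httpd-demo` replaces the per-box process hunt.

```ini
# httpd-demo.service -- hypothetical unit; name and paths are illustrative
[Unit]
Description=Demo httpd confined by a systemd cgroup

[Service]
ExecStart=/usr/sbin/httpd -f /etc/httpd/httpd.conf
# Resource caps apply to the whole cgroup, forked children included:
MemoryMax=512M
CPUQuota=50%

[Install]
WantedBy=multi-user.target
```

If one instance wedges, `systemctl status` (or `systemd-cgls`) shows its state and every process it owns, and `systemctl kill httpd-demo` takes out the whole tree; none of that existed in the chroot days.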

Comment Re:But is Wayland better? (Score 5, Informative) 227

I'm going to start where a lot of people don't usually start: the actual people who maintain X11. They hate the code base; they simply don't want to deal with the tangled mess that it is. Seriously, go look at a dependency graph of just the xserver, or a slightly higher-level view of the state of things. Point is, no one wants to maintain this mess. Anyone feeling frisky enough to do so is strongly encouraged, but the majority of developers who worked on this in its heyday have long since left the building. The sheer pool of people working on X is small, and fresh blood in the development pool is best described as anemic. Fewer developers working on one project and more on another pretty much seals the deal on the direction. Arguments that X is better fall on non-existent ears. You want to talk to an X developer? Head over to Wayland; that's where you'll find a lot of them.

Next in line is that X is ineffective at one of the things it's supposed to do: draw stuff on your screen. (Not even going to touch multi-monitor, sleep, touch input, etc., all of which have had extensive hacking to get working, resulting in patches of code with serious bus-factor-of-one issues.) X11 lacks pretty much everything we take for granted in a modern GUI. Want anti-aliased text? X11 doesn't do that. Want the concept of an alpha channel? Not present in X11. Quite literally, X11 does nothing in the way of what KDE, GNOME, Unity, Cinnamon, or whoever wants. Instead, your chosen toolkit uses a library that builds in memory the bits that need to be drawn, and if your xserver supports RENDER, your toolkit just hands a stream of bits over to X11 via that method, and X forwards it on to either the card or to a compositor, which, by the way, X11 has no concept of, hence the reason you need one external to the xserver. At some point someone said: if every toolkit is building bits by itself and then having X forward them on, why not just cut out the middle man? Why have this extra layer that we keep having to build ad-hoc extensions for? (RENDER, XDamage, RANDR, XFixes; yes, literally an extension to fix stuff, but mostly to turn a lot of old X11 stuff off.) All of these wonderful extensions are in reality short-circuiting old cruft in a code-ugly fashion. Add in the new complexities of video cards, functionality that's difficult to ever get working, and yeah, everyone is ready to put the old girl out to pasture. X11's lack of so many things is a roadblock to tapping your card's full ability, which is why most of the time we're happily ignorant of all the bypassing of huge parts of the core of an xserver via the prolific set of extensions that come automatically built into your distro (which is why a lot of folks never notice and just think this is the way X was built, but nothing could be further from the truth; try building an xserver from source).

Now let me move on to your points.

Network transparency. X11 has it. Wayland doesn't.

If you are using X11 over ssh, you aren't using X11's network transparency. What you are doing is streaming pixels across ssh; you aren't using anything remotely resembling the core X11 protocol. On the remote side, Cairo, Qt, Mutter, or someone is drawing pixels, and that gets wrapped into a generic X11 package and sent to you to unpack, and then your computer decides what to do with the newly received pixels. There are no commands like "Window A is currently at location x,y. It has a button at rx, ry relative to the top-left corner of the parent widget, blah blah blah." Nope, it's just "here's pixel one, here's pixel two, here's pixel three..." There's no distinction in X between a button in an application running on a remote server and a picture of a button; that's all handled higher up in the stack, and it's been this way for pretty much all of this side of Reagan's presidency.
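To put rough numbers on that (a hedged sketch; the real wire formats differ, and the structured-command encoding here is made up for illustration), compare what a compact "draw a rectangle" command costs against shipping the same region as client-rendered pixels:

```python
import json

# A structured drawing command, the kind of thing core X11 protocol could express:
draw_cmd = json.dumps(
    {"op": "fill_rect", "x": 10, "y": 20, "w": 100, "h": 30, "rgb": "#336699"}
).encode()

# What modern toolkits actually hand over: the region pre-rendered client-side
# (by Cairo, Qt, etc.) as a raw RGBA pixel buffer that X merely forwards.
pixels = bytes(100 * 30 * 4)  # 100x30 pixels at 4 bytes each = 12,000 bytes

print(len(draw_cmd), "bytes as a command vs", len(pixels), "bytes as pixels")
```

The gap between those two numbers is why "X11 over ssh" today behaves like a slow VNC: you're shipping the second thing, not the first.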

So that said, what stops Wayland from blowing pixels over a wire? Literally nothing, aside from the fact that zero of the Wayland developers see a point to it. There aren't many X developers who see a point to it either. Streaming pixels over a pipe is silly, and there are literally a billion different ways of doing it that are so much better. There have been a few patches to X extensions (fixes to extensions that were fixes) to make this streaming a bit better over the years, but like everything X, it has turned into a tangled mess of code that brings blood to the eyes of those who look upon it. In light of all the other things out there that do this better, there are just better places in X and Wayland to spend developer time than this specific area. However, anyone feeling frisky enough to tackle the X code or to start building one for Wayland, by all means go for it. Saying X11 has network transparency, which it does, does not mean you or anyone else is actually sending *real deal* X11 commands over a pipe; you're just sending pixels, and there's nothing technically preventing Wayland from doing that too.

On top of that they're doing the #1 thing you're not supposed to do in development: completely rewriting a working system.

This is the way we've done it and by golly this is the way we're always going to do it. /s

But seriously, you'd have to use a really loose definition of "working" to apply it to today's X11 stack that most people are using. If you're looking at your Linux desktop right now and the graphics are smooth and buttery, that's not because of some awesome thing in X; it's because you have great drivers that know how to effectively bypass X and get right to your card. By all means, let me know how well LibreOffice works using nothing but X11 primitives in Xlib.

X11's main flaw is that it's supposed to be inefficient.

It is, just so we're clear on this point. That you've not noticed isn't because of anything X actually does; the underlying toolkits are bypassing things left and right. Here's a quick overview: most windowing systems now use OpenGL to some degree to draw, and with DRI that goes right to the DRI driver, which sends it lower in the stack, no X required. Other parts of your desktop use GLX to render indirectly, where X11 does see the package coming in, but then reroutes it to the DRI driver via AIGLX. Actual old-ass X11 applications go through the entire stack before eventually making it to the lower layers; chances are, you aren't using any of them. I don't know what programs you're using, though; you could very well be breaking out twm on the daily. But the majority of Linux users are on toolkits and desktops that have long since circumvented X11's core protocol.

I'm not happy about this.

No one is happy about it. Let me clarify: no one who actually worked on X11's code is happy about this direction. That's quite a few decades' worth of code that has slowly become an unusable mess, for reasons that range from forward thinking that just never panned out to outright "WTF were they thinking?" However, the Wayland developers aren't idiots. They realize that X is going to be around for a very, very, very long time. I swear, I hear people tossing shade at Wayland like it's banned anyone from ever using X again. Just like the IBM AS400, IPv4, and Java, we're stuck with X for the foreseeable future, and thinking that the Wayland developers don't know that is just cynicism mixed with nostalgia. Wayland is something for future desktops, future devices, and future graphics cards. That might not include you over there on your 4-billion-strong Linux farm, or on your local library's desktop system, or whatever. No one is saying that everyone has to move to Wayland. Now, if you want to stay current with where all the major developers are going, then yeah, you're going to need to rethink your thinking. But FOSS allows minor players to play a role too, and those minor players might not feel like moving on with the big boys. We all thought GNOME2 was dead, and then MATE came along. Yeah, the project is small, but it's still actively maintained, and for the folks using it, it's enough.

This all plays into a bigger problem in the Linux world: "Linux purity." Look, the popularity combined with the openness of Linux means there are going to be builds that move away from sysvinit, move away from X11, move away from things we would commonly refer to as "Linux" (*cough*Android*cough*). There are way too many folks holding on to a made-up idea of what "Linux" means. Wayland, Mir, Unity, GNOME3, systemd, and the rest are all flame wars that get started over someone's preconceived idea of what "Linux" means to them. They're still stuck in the war of "if we don't unite, we'll never beat Microsoft," and they haven't noticed that the war is over; no one is fighting anymore. People "computer" differently nowadays. Microsoft does not care about fighting Linux; they're more preoccupied with how to convince people to buy a Windows Phone or get on their cloud-based services...

Are mistakes going to be made along the way with Wayland/systemd/whatever stokes your fire? You betcha. X11 was a solution to a problem right up until it became the problem. Wayland is a solution to X11's problem right up until it becomes the problem. Then we'll invent something new, and something new to replace that, and on and on. One of the core things I was told when getting into the tech industry was, "This industry changes at a pace that's usually faster than people's willingness to change." I get that a lot of people want to throw out the usual "change for change's sake" argument as justification for not fixing something that isn't broken. But we can either stagnate and find out all the fun that brings, or we can acknowledge we're in an industry that's always changing. Does that mean we're doing the right thing by changing? No one knows that from the outset. Maybe moving to Wayland is a bad idea, but it's not a bad idea because X11 is a better option, and it's not a bad idea because we should always do things the same way and never build better things.

I can't readily think of a reason that makes Wayland a bad idea; it's mostly aiming to do what we do now, but better. The number one objection I hear is the whole network transparency thing, and typically it's just a misunderstanding of what X11/xserver actually do, or just flat-out wrong, ill-informed, get-off-my-lawn thinking. As with anything, something is bound to crop up and sting Wayland in the ass. But the advent of 3D rendering and these massive cards that can do amazing things is one of the multitude of things that stung X11 in the ass (multi-touch and the concept of tablets and phones being some others), and there's no reason to just sit here and not see that, to just not notice that things are changing. We can either get on the bus, or stick a fork in her and call it done as we watch everyone else fade off over the horizon.

Comment Re:$70k? (Score 1) 268

shouldn't mean they can ignore wasteful spending on a small scale.

Irrational argument #14. Value is at times a subjective thing: what holds no value for you can hold immense value for someone else. Especially considering the point being discussed, for a lot of people there's just nothing to gain from the information. People can do cost/benefit analysis and whatnot to justify/quantify those things, but ultimately it just boils down to what sounds better in the end. I digress, though, because its value isn't what I'd like to talk about. The argument that we should cut it because it's wasteful spending is seriously short-sighted when compared to the billions spent elsewhere, be they of value or not. The argument that it's useless is subjective, but the argument that this saves us money is dumb. So yeah, given the scale, $70k is meaningless, and trying to argue otherwise is just silly.

An official said it would save $70,000 through 2020 and that the removed disclosures, salaries and appointments would be integrated into in the coming months.

If they want to say, "It was a wasteful program because it gave so little back to the public," by all means, they should just go with that. The end. No further explanation required. But adding the argument that we need to nickel-and-dime our budget of several trillion dollars, that's like NYC planners doing zoning surveys based on anthill locations. I get that it wasn't their lead argument, but someone adding in that it saves money is really silly. So the parent has a point: why even bring up the savings? It's totally meaningless. You don't like the program? Then just cut it and move on. There's no need to resort to silly dollar figures to justify the position; it just looks silly even bringing it up. The whole security thing they talked about is enough, and subjective enough, to be the main point of debate. Trying to cite cost savings in their argument or in this thread is nonsensical. So before any more of us (myself included) make a mountain out of a molehill, let's look past the $70k before we get too tightly wound up about that figure, just saying.

Still nothing but respect for you all.

Comment Re:I'm a really worried longtime Linux user (Score 2) 191

So is Ubuntu Linux effectively a dead project/distribution at this point?

Wow, hyperbole much? There are a lot of very profitable things within Ubuntu Linux, and now they're going to focus on them. That your favorite part of "things Canonical" is being pared back doesn't mean the whole is dead.

A shakeup of this magnitude can't be good for the project's health.
This really makes me worry about the health of the Linux ecosystem as a whole.

Um, Linux is doing quite fine, really. I think you're equating Linux Desktop with all of Linux, which is incorrect.

Between the PulseAudio, GNOME 3, Wayland, and systemd disasters, we Linux users have seen so much turmoil these past several years.

Okay, at some point everyone is just going to have to move past this dead horse; it's turned into a jelly-like substance from all the beating. All of these projects have evolved from the infantile stage they were once in; maybe some of the critics should too?

If the Ubuntu project falters, the Linux ecosystem will be getting even less diverse.
Even now there are fewer and fewer differences between Fedora and Debian.

The problem isn't that the ecosystem is less diverse; it's that your definition of the ecosystem is highly limited. If we limit all of Linux to just those two distros and their derivatives, then yeah, there's not much separating them. But news flash: there wasn't much separating them before, either.

Even the package management is almost identical now, with the main difference being whether we type "dnf" or "apt"!

(facepalm) Yes, on the surface they look similar. Quick question: do you scream this when talking about tools like sed, tar, and diff? The whole point is to offer somewhat similar commands to make admins' lives a whole lot easier. However, if you look inside dnf or apt, you'll see they operate differently: how they build internal databases, how they manage memory, etc. (Since Slashdot loves car references:) just because all cars have a gas and a brake pedal doesn't mean all cars have the same engine.

This lack of diversity has resulted in stagnation.

A lot of people think diversity = innovation, and that's not exactly true; I think it should be obvious why. Additionally, if anything, Linux in a broad sense is far from stagnant. Again, I think your statement comes from limiting your perspective to just the surface of the Linux Desktop. Even in Linux Desktop world, a lot is going on under the hood. Not every release needs to include 50,000 bells and whistles.

I really want Linux to succeed, but all of these developments leave me feeling very uneasy.

No. You want Linux Desktop to become the dominant choice, and the fact is that's not happening, ever. People "computer" differently nowadays, and there are blends of "Linux," so to speak, that already address that space. RedHat or Ubuntu or whoever might move into the workstation, or they might not. But the home PC market is having a rough enough time trying to convince people to be in the "home PC market" at all. Few if any are worrying about the "home Windows market" versus the "home Linux market," because they're just trying to address the core tenet here of actually getting PCs sold. So stop worrying about something that's not going to happen, and be happy about the dozens of other ways that Linux has dominated several different markets outside the PC.

Comment Re:Never understood the Ubuntu hate... (Score 1) 374

"it'll mean closed-source graphics drivers will have to support 2 display servers, and they may not want to do that"

Okay. That's sort of true, but one of the big things was that Canonical, from the word go, seemed hostile to the Wayland community. Now, that's not saying a lot, because as we all know, a lot of communities in the FOSS world are pretty hostile by nature. So I'm not saying that justifies the hate that went down, but it played a big role.

Here's a link from the Ask Ubuntu site, and the first comment under the accepted answer pretty much sums up the frustration that a lot of folks had.

This still doesn't answer what advantages mir offers, it just answers why Wayland was not chozen

Canonical blows at PR; if they were actively trying to woo people to their argument, they were doing an incredibly bad job of it. Now I get it: they're developers, and they don't need to be disturbed with BS like, "Hey! Why are you doing this thing, Canonical? Why not just use Wayland?" etc. However, Canonical could have easily stepped in and really done some outreach to help people get behind their brand, which they sort of did; I know Jono Bacon did a whole lot of outreach, and he was pretty damn amazing at it. I personally don't think Ubuntu was the same after he left, but that's seriously just me, I think. The point, though, is that Canonical wasn't always forthcoming about their plans, and it really got heated as the infamous "Not Invented Here" argument took them like a California wildfire. NIH basically took everything they were working on and twisted it into a conspiracy theory about how this was all a splintering of pure-bred Linux (for whatever that means).

Take that crazy NIH mentality and add a touch of salt from people thinking that Canonical was "M$" in disguise, or that they were some young upstarts (ha! I made myself laugh with upstart) who didn't understand the philosophy of Unix. There were a few more crazy notions out there, but I think those two covered a lot. But I digress. You take all that fervor and combine it with Canonical's lack of touching base, and at times actively retreating from addressing it, and it basically was a fire no one was putting out.

Now, I'll say that initially Canonical did try to stick the olive branch out there, but they got a first-degree burn and basically said never again (ish; mostly they just never really said anything outside their circle, so it was a "well, we're just not going to talk to them anymore"), only later to find themselves on the spit over some coals. I don't think Canonical did anything wrong per se, but FOSS seems to be a different world of thinking about software purity. That purity comes in about a billion different flavors, ranging from RMS-grade "open source or nothing" to RedHat-grade "we are the community; work with us or become an outsider." I think Canonical just simply pissed off enough of those groups to finally reach a tipping point where it became mainstream to piss on Ubuntu.

I will say this: the different communities in the FOSS world are highly ideological, and that's helped them to a point, but we are reaching the top of the curve where that helps and moving into the part where it begins to hurt. At some point, this multitude of little tribes and whatnot is just going to have to let go of the notion of "pure-bred" Linux and realize the world is changing. Things like Wayland, systemd, GNOME3, and so on exist, and whether or not they conform to what a given group thinks is good, people will just have to accept the world the way it is or get busy on the alternative. However, a lot of folks seem content with either purifying with fire or apathetically stating, "get thicker skin." Linux is getting bigger, and at some point Linux is going to splinter, and we're all just going to have to be okay with that; it's the natural evolution of things. However, I think everyone is so charged up about destroying Windows (er, Winblow$) that they're afraid splintering Linux will hurt its chances at ever getting to the desktop. I hate to say it, but that ship has mostly sailed (and actually, I think the ship sank in the harbor, but again, that's just me projecting; take with grains of salt), and there's not going to be a Linux consuming the desktop. Mobile has changed how a lot of people "computer," and so the desktop everyone was aiming for is no longer the same thing today. But I'm getting way off point here. Bad me.

Canonical did what they could to address the horde, but the horde is insatiable. Canonical thought they could just ignore the horde, and that just allowed the fire to get hotter. Actually, using the term horde is a bit misleading, since it seems to imply a singular group. The bashers of Ubuntu came from all walks of life and "tribes," to use Shuttleworth-ese. Pure FOSS hated it, pure-bred Linux hated it, young whippersnappers writing GNOME hated it, KDE folks saw it as a curiosity until the whole Kubuntu thing happened and they then hated everything Canonical... and so on. Eventually Canonical had next to no friendly tribes left and received outsider status, and next thing you know, "Hey everyone! It's cool to kick Ubuntu! Quick, let's give a few swift ones to the breastbone before it gets back up!"

There's a lot of history I'm skipping over here or just lightly touching on, and there's a much larger issue I'm hinting at, but I'm just not the guy with the will to have my inbox blow up with that firestorm. So you'll just have to take my account here as a really doubleplusungood summary of things that went down, as seen by slack_justyb. However, I will add just one more thing, and this is important. The failure of desktop convergence didn't pivot on how the community played with Canonical. Canonical could have totally gone rogue and never looked back. Their project was insanely huge from the word go; it was, IMHO, an incredible undertaking when they were pitching it. I think if they'd had the support of the community and commercial backers, it would have gone a bit smoother, and they might even have been able to complete the project fully solo, but it was highly unlikely. I see the failure as just a confirmation of how absolutely long the odds were. They're still doing stuff with the pieces they did complete to make money, and that's great! They should profit from all of their hard work. But the ultimate goal's failure was never hard-wired to community support. I don't think Shuttleworth is making that connection either in any of his posts, though I've read a few here that seem to imply it. What Mark is pointing out is a valid concern about the community while looking at the smoldering ruins, while distinctly not saying that the two things are related (I'm using "ish" here again because it sort of sounds like he's trying to connect the two, but he never directly holds the ruins up in protest saying "Community, why have you forsaken me!?", so I'm giving him the benefit of the doubt). I'm guessing we ought to address it before it consumes too many more folks, and ultimately I think Mark is trying to use the spotlight on the ashes as a brief moment to address something he feels is more important than the funerary pyre that Unity8 went out on.

Comment Re:I don't have a problem ... (Score 1) 422

That language? The language that explicitly excludes redacted personal information covered by other statutory requirements from the public disclosure requirements? The PII that is required to be removed by this section of the law:

If you think part (b) means nondisclosure, then you lack an understanding of what nondiscretionary actually means legally.

Nondiscretionary relates to budgets, not information.

(C) publicly available online in a manner that is sufficient for independent analysis and substantial reproduction of research results, except that any personally identifiable information, trade secrets, or commercial or financial information obtained from a person and privileged or confidential, shall be redacted prior to public availability.

However, while you cite section (C) of paragraph (1), paragraph (2) moves on to state:

(2) The redacted information described in paragraph (1)(C) shall be disclosed to a person only after such person signs a written confidentiality agreement with the Administrator, subject to guidance to be developed by the Administrator.

Again, while the language in (1)(C) feels like it would provide privacy, that's wholly dependent on the guidance that's given, as stated in (2). More to the point, PL 95-155 indicates in (6)(b) that

Grants made by the Administrator under this section shall be subject to the following limitations:

Those limitations in the original law are just three, but they were amended by PL 96-569 and made subject to Congressional approval via discretionary assignment. That is made clear in section three of the original law.

Appropriations made pursuant to the authority provided in section 2 of this Act shall remain available for obligation for expenditure, or for the obligation and expenditure, for such period or periods as may be specified in the Acts making such appropriations.

That means guidance for the enforcement of HR 1430 (1)(C), as indicated by (2) in HR 1430, is pursuant to the rules outlined in 42 USC 4363 as given in PL 95-155, which ultimately is Congressional consent to what that guidance would be. That obviously cannot run afoul of 42 USC 1320 or 45 CFR Part 162, but Congress may mandate disclosure, as indicated by PL 114-38 under Title 26, when pursuant to discretionary matters of Federal employees.

In short, Congress has the right to unveil anyone or anything that has taxpayer dollars attached to it. They also have the right, under this bill, to change the guidance granted to the EPA within the limitations of section 6 of the original law. That means scientists will need to lawyer up to ensure that they are in full compliance with the law as outlined by subsection 1395 under the same title.

Doctors make it look easy because an industry had to grow up around this law to ensure that compliance could be met. Scientists handling medical information would need the same infrastructure, but since they're doing research rather than having the person actually come to them, it gets a lot stickier. It would actually be easier if people who felt they had illnesses covered by Section 4 of 4363 went to researchers, as then the burden could easily fall under 1378 part (d), and like I said, who knows, the path might get smoothed over if such things start becoming normal. But that is not how it is done today, and tying the entire process to budgetary procedures in the House seems like a sham way of saying, "we can delay you if we don't like what you are doing."

All of that affects peer review. Yes, research should be peer reviewed; no one is saying it shouldn't. What I am saying is that researchers are less likely to publish if they feel that doing so will get their asses sued into oblivion. The law makes that a reality in section (2), by means of section 3 of the original law. Congress can easily tie anything granted under section 4 to discretionary requirements. HR 1430 further allows this to extend to anything the EPA might propose under the authority granted to them in section 2 of the original law. And since the EPA uses published research, that means Congress can light a fire under anything the EPA cites that doesn't meet its "guidance." That guidance would be part of any spending bill, most likely every spending bill, as we bounce between political parties. It's the same reason that when Congress says "nope," NASA just stops researching the environment here on Earth. That research is tied to budget guidance. Scientists not attached to NASA can keep on doing their research, but NASA can't cite any of it unless Congress grants them that ability.

This does nothing to change the length of time it takes to do research

You're right that for anything the EPA won't use in its rulemaking, this changes nothing. Anything that isn't a section 4 covered issue isn't changed either. But the stuff that is, and the research the EPA feels it might want to use to pass rules, that is affected, because the research can't be used if Congress says, "nope." Independent research and research done by a foreign country wouldn't change either. Which brings me to...

The US isn't anywhere near the tipping point, but we're not going in a direction that really encourages researchers to learn here and more importantly *stay* here

Since the original law permits research from foreign countries to be used, and this bill does nothing to change that, it would actually be easier for someone wanting to do climate research for the US government to leave the country, do the research there, and then have it published in a well-known journal. This is literally the opposite of the direction the US should be taking. Fine, the government wants to say fuck science; then at the very least your bill should make it impossible to use non-domestic research in domestic rulemaking. Not amending that part leaves open a big hole where someone overseas has it a lot easier than someone here in the States.

It has nothing to do with the peer-review process that takes place before the EPA should ever consider relying on someone's research for regulatory guidance.

Yes it does: it requires a "minimum" amount of clearance as outlined by whatever Congress dreams up. Researchers can do whatever the fuck they want, but if it doesn't meet what Congress wants, the EPA can't cite it. So ask yourself: if you're publishing research that you hope will help people become more aware of the environmental problems of this world, but the EPA can't touch your research with a ten-foot pole, then while you are correct that "it didn't technically alter the process," you really are going to be at the end of your research wondering why the fuck you did it in the first place, since your own government cannot be permitted to use it in any of its rulemaking decisions.

with the specific exclusion that I just quoted to deal with PII that is required to be redacted by the previous section.

You never actually quoted it in your reply, but I'll just assume that you did and that's what you meant.

What it DOES require is that research results be provided with enough detail to be verifiable

Science by default already does that. It doesn't need this sham to codify it and then tie it to budget lawmaking rules. Research shouldn't be under guidance by Congress. Congress rarely understands what the hell is going on 200 miles outside of DC, much less scientific matters. The only people who feel there's an issue with how science is currently being conducted are people who don't understand science. This bill makes about as much sense as Congress mandating that people wash their hands, tying a state's Interstate repair budget to how much Dial soap it can import, and additionally stating that they don't have to further refine that law but can instead change it whenever they propose a new budget. Mandating or limiting something and then tying a budget-level item to it is the whole thing that everyone hates, yet we keep letting Congress do it. So fuck it: if people feel like making research a budget-level item, and feel that the people setting the budget are also apt enough to dictate what can and cannot be cited as "science" by the EPA, then I guess we get what we deserve down the road. At this point this country might as well collectively spit liquor and salt in the face of educators and researchers. We defund them, we ill-equip them, and now we're just saying that some assholes with their heads up their collective asses know better than they do (and yes, that goes for NCLB, which was stupid when it was proposed).

So yeah, while technically this doesn't do anything to science, researchers aren't lawyers, and we don't currently have an industry adjusted enough to support all the crazy stipulations that bind medical records with the unique requirements of environmental research. Perhaps that industry will come about in five to ten months, or five to ten years, who knows. I made that clear in the first go-round, but I'll make it really clear here: "Ultimately this could all just be a lot of hot air, as the industry may adapt at a pace that reduces the ill effects of this bill to nil." But it does beg the question: why the fuck are they tying this just to section 4, which has everything to do with global warming specifically? Huh? Clean water just not pegging high enough on the radar to merit tossing up a billion legal hurdles between research and research usable by the EPA? Superfund sites and their impact on local wildlife not requiring the same amount of "honesty" that global warming needs? There's a very specific reason this whole bill begins with amending section 6(b) of the 1978 act. There's a heavy reliance on 6(b), where funding flows to the line items indicated in section 4, minus those outlined outside the current subsection (as outlined in the 1980 law). Section 4's language changed back in 2006 to become the mission for environmental change; that 6(b) still binds this in its own little category, apart from all the other funded groups of research under the EPA's authority, just shows that Congress is still being petty about climate research.

Ultimately though, it's not like this is some super big deal, because whatever it is will be struck down as soon as the House changes. It's just stupid that Congress is wasting its time on this matter, on something that will be so clearly defeated as soon as the Republicans aren't in the majority. Additionally, tying it to the budget makes it all the easier to kill down the road. So fuck it, I think this comment reply is the last bit of energy I'm going to put into this brain-dead bill. But if you think it won't have a short-term effect on research, then you're just ignoring everything this bill talks about. If you think it will actually make research more "honest," then you're ignoring how science works. And if you think it applies to everything the EPA does, well, it doesn't: just global warming, and it's pretty obvious who the winner is, just like the Dial soap bill I mentioned a few paragraphs back. But if you feel better thinking those things will happen with this bill, then by all means, don't let me come between you and bliss.

Comment Re:I don't have a problem ... (Score 1) 422

HR 1430 amends 42 USC 4363 sec. 6(b), adding the following paragraph (2):

(2) The redacted information described in paragraph (1)(C) shall be disclosed to a person only after such person signs a written confidentiality agreement with the Administrator, subject to guidance to be developed by the Administrator.

By all means, you can look up 42 USC 1320 and 45 CFR Part 162 to see some of the process that would be required by paragraph (2) here. This would also seem to give the Administrator some ability to set up a rulemaking process, so long as it runs parallel to established law.

Also, just FYI, there's a thing called Google, you should try it.

Comment Re:I don't have a problem ... (Score 5, Insightful) 422

The problem isn't scrutiny. The EPA also has to deal with medical issues that arise from environmental issues, and there's currently a law restricting medical information from being handed out in the manner this proposal's language would require. Simply put, it would be impossible for the EPA to make rules on certain issues without running afoul of confidentiality laws, and that's really simplifying the process they are outlining. There are ways to get it all to mesh well, but those methods can take several years of legal paperwork, which basically means that scientists will need lawyers at the ready should they decide to publish anything that *might* be peer reviewed.

This isn't a law hoping to add more scrutiny; this is a law to make scientific research take longer than a two-term presidency before it even hits the peer review stage. The idea is that if science starts looking like it might hurt an industry, then on the next presidential cycle the opposing party can get someone in who will defund the whole thing, delaying it another four to eight years. Its entire purpose is to stretch the process to such outlandish time frames that Congress, in all of its slowness to react to anything, will have time to mount a political opposition.

So yeah, taking a two-year research project and extending it to something on the order of twenty years isn't something I'd be so receptive to. However, it is worth pointing out that the constant defunding of science in the US will ultimately just push scientists to find funds elsewhere. There is no shortage of nations willing to pay top dollar for people who can innovate. The US isn't anywhere near the tipping point, but we're not going in a direction that really encourages researchers to learn here and, more importantly, *stay* here. A lot of folks in science could not care less about politics and would greatly prefer that Congress not bind their work to it. Basically, tying research to Presidential schedules runs counter to that whole idea.

But who knows, maybe the whole legal process will become streamlined with zero butt-hurt changes from Congress along the way, lawyers and scientists will be in good company, and all of the roadblocks I mentioned will never come to pass. Who knows!?

Comment Re:Counting water (Score 1) 331


So often people say this kind of crap about water, and the **ENTIRE** point is water that is usable! It takes energy to turn a random source of water into water that we would call "drinking" water, or water we would give to cattle, etc. Water doesn't just magically revert back into "usable" water once it is consumed. Granted, right now the major driver for recycling water is the sun's energy, via evaporation. But then we're at the whims of where the water falls and when. So we either have to get better at using the water when it randomly hits the ground (large collection pits and storage systems), or we need to get vastly better at moving water that's already hit the ground (national pipe networks for moving water all over the US), or some combination of both.

The same problem applies to wind power: we're just hoping the wind is randomly blowing in some section of the planet we have mills in. The plus side there is that the energy we generate from those random spots can be easily moved around on power lines; there's no such easy movement for water at the moment. So while yes, the absolute amount of water on the planet hasn't changed, the amount of energy it takes to get it back into the form it came from is high. We just don't notice, since we mostly rely on free energy from the sun to take care of it and hope all the pluses and minuses wash out in the end. At the rate aquifers are being drained versus the rate at which nature can refill them, we're in serious negative territory. Nature just doesn't move as fast as industry can consume. The reason we still stay afloat is that nature had a few million years on us to build those reserves.

Well, in case of meat production — or indeed any other Earth-bound activity — no water is lost. Zero. Nada. So, what is the quoted statement supposed to mean?

We are never going to run out of water in an absolute sense, that's just stupid. But we will run out of economically viable water, that's the entire point. When water becomes too expensive to actually buy/refine/return back into a usable form/whatever, it won't matter how much absolute volume of water is on this planet, you will have no access to it unless you have enough money for it. The same is true for crude oil. This planet will never have zero mL of oil on it, ever. Thinking otherwise is ignoring how absolutely massive the amount of crude oil on this planet is. However, we are quickly running low on economically viable crude oil. At some point, oil will become so expensive that the majority of people will choose another option or they'll be up a shit creek without a paddle. The entire point of anything is to try and get ahead of the curve so you don't find yourself on that creek.

Yes, the parent said all of this already in their comment, but I feel that if it isn't S-P-E-L-L-E-D out, some folks might not get it. We're past the point at which nature can resupply water sources as fast as we use them. We either need to resupply those sources ourselves or get better at using them, because doing neither is slowly going to increase the price of everything that depends on water, and that's a lot of things.

Comment Re:So wait, where do they get the sodium? (Score 1) 197

Wait, isn't separating sodium and chloride by electrolysis expensive in itself, and hence the reason we mostly use the Solvay process? Additionally, the Solvay process creates by-products that have no current use. Actually, I think that's the reason Onondaga Lake is a superfund site today: they just kept dumping the by-product in the lake.

Do we have a clean, cheap way to separate sodium and chloride? Because I'm not coming up with one in my mind, but it's been forever since I studied chemistry.
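For what it's worth, here's a rough sketch of the two routes as I remember them from chemistry class (so treat the details, especially the operating conditions, as my recollection rather than gospel):

```latex
% Electrolytic route (Downs cell): electrolysis of molten NaCl,
% energy-intensive since the salt must be kept molten (~600 C with a CaCl2 flux)
2\,\mathrm{NaCl}(l) \;\xrightarrow{\text{electrolysis}}\; 2\,\mathrm{Na}(l) + \mathrm{Cl}_2(g)

% Solvay net reaction: yields soda ash (Na2CO3), not sodium metal,
% with CaCl2 as the low-value by-product that piles up as waste
\mathrm{CaCO_3} + 2\,\mathrm{NaCl} \;\longrightarrow\; \mathrm{Na_2CO_3} + \mathrm{CaCl_2}
```

Note that Solvay never gives you elemental sodium at all, just sodium carbonate, and the CaCl2 waste stream is (as I recall) a big part of what got dumped at Onondaga. So if a battery design needs sodium metal, it's the energy-hungry electrolytic route or something like it.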

Comment Have we all forgotten how things are played? (Score 1) 267

I'm just astounded at the number of folks on Slashdot pointing out "these things have been a long time in the works; Trump played no part in this!" That's some USA Today-level commenting. Yes, we all know: anything good, and the President-elect takes credit; anything bad, and the President-elect blames the current President. This play is about as old as all get-out.

If anyone on here is stating the obvious thinking they're somehow revealing the lie, well, my assumption is that Slashdot users are a little too intelligent to fall for the "look at what I did" game. If there's anything to note about this, it's that the majority of jobs Trump aims to "bring back" to the USA are starting to look like low-wage jobs, the one piece still missing from the automation process, and jobs that aren't, at any large scale, going to do much for the economy. In order for Trump to make good on the infrastructure changes and the tax cuts he's aiming for, he's banking on 4% GDP growth (note: from the most right-leaning website I could find carrying it) for every year he's in office. You can head over here to see what the going rate of change has been. You'll see lots of ups and downs that, averaged over a year's span, never come out to 4%.

If the old Trumpster fire thinks he's going to reach his goal by repeating over and over the 8,000 jobs being indicated here, he's dead wrong. They're jobs, yes, but they do not pay enough to move the needle much, even if this were repeated every day the dude was in office. Just to note: that Carrier deal that Trump thumps, I'll give him the benefit of the doubt and call it 1,000 jobs. We'd need roughly five of those per day, every day he's in office for the next four years, to approach the GDP growth he's aiming for, if we strictly keep it to trying to grow the economy.
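To put rough numbers on that "five Carrier deals per day" point, here's a back-of-envelope sketch. The ~$18.5T GDP figure and the $100k of annual output per job are my own assumptions for illustration, not numbers from the article:

```python
# Back-of-envelope: how much economic output would "five Carrier-sized
# deals per day, every day, for a four-year term" actually represent?
# All dollar figures below are assumed round numbers, not sourced data.

JOBS_PER_DEAL = 1_000        # giving the Carrier deal the benefit of the doubt
DEALS_PER_DAY = 5
DAYS_IN_TERM = 4 * 365       # one four-year presidential term

GDP = 18.5e12                # assumed 2016-ish US GDP, in dollars
OUTPUT_PER_JOB = 100_000     # assumed annual output per job, in dollars

total_jobs = JOBS_PER_DEAL * DEALS_PER_DAY * DAYS_IN_TERM
added_output = total_jobs * OUTPUT_PER_JOB
growth_share = added_output / GDP

print(f"jobs over the term: {total_jobs:,}")        # 7,300,000
print(f"added annual output: ${added_output/1e9:.0f}B")
print(f"share of GDP: {growth_share:.1%}")          # 3.9%
```

Even under those generous assumptions, an entire term of five deals a day buys roughly one year's worth of a 4% growth target, which is the scale of the problem.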
