United States

NSA Infiltrated RSA Deeper Than Imagined 168

Rambo Tribble (1273454) writes "Reuters is reporting that the U.S. National Security Agency managed to have security firm RSA adopt not just one, but two security tools, further facilitating NSA eavesdropping on Internet communications. The newly discovered software is dubbed 'Extended Random' and is intended to facilitate exploitation of the back door in the already known 'Dual Elliptic Curve' encryption software. Researchers from several U.S. universities discovered Extended Random and assert it could help crack Dual Elliptic Curve encrypted communications 'tens of thousands of times faster'."
Businesses

Apple, Google Go On Trial For Wage Fixing On May 27 148

theodp writes: "PandoDaily's Mark Ames reports that U.S. District Judge Lucy Koh has denied the final attempt by Apple, Google, Intel, and Adobe to have the class action lawsuit over hiring collusion practices tossed. The wage fixing trial is slated to begin on May 27. 'It's clearly in the defendants' interests to have this case shut down before more damaging revelations come out,' writes Ames. (Pixar, Intuit and Lucasfilm have already settled.) The wage fixing cartel, which allegedly involved dozens of companies and affected one million employees, also reportedly affected innovation. 'One of the most interesting misconceptions I've heard about the "Techtopus" conspiracy,' writes Ames of Google's agreement to cancel plans for an engineering center in Paris after Jobs expressed disapproval, 'is that, while these secret deals to fix recruiting were bad (and illegal), they were also needed to protect innovation by keeping teams together while avoiding spiraling costs.' Ames adds, 'In a field as critical and competitive as smartphones, Google's R&D strategy was being dictated, not by the company's board, or by its shareholders, but by a desire not to anger the CEO of a rival company.'"
Transportation

Security Evaluation of the Tesla Model S 93

An anonymous reader writes: "Nitesh Dhanjani has written a paper outlining the security mechanisms surrounding the Tesla Model S, as well as its shortcomings, titled 'Cursory Evaluation of the Tesla Model S: We Can't Protect Our Cars Like We Protect Our Workstations.' Dhanjani says users are required to set up an account secured by a six-character password when they order the car. This password is used to unlock a mobile phone app and to gain access to the user's online Tesla account. The freely available mobile app can locate and unlock the car remotely, as well as control and monitor other functions.

The password is vulnerable to several kinds of attacks similar to those used to gain access to a computer or online account. An attacker might guess the password via a Tesla website, which Dhanjani says does not restrict the number of incorrect login attempts. Dhanjani said there is also evidence that Tesla support staff can unlock cars remotely, leaving car owners vulnerable to attackers impersonating them, and raising questions about the apparent power of such employees to locate and unlock any car with or without the owner's knowledge or permission. In his paper, Dhanjani also describes the issue of Tesla's REST APIs being used by third parties without Tesla's permission, causing Tesla owners' credentials to be sent to those third parties, who could misuse the information to locate and unlock cars."
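
To get a feel for how small a six-character search space is against an endpoint with no lockout, here is a minimal back-of-the-envelope sketch in C. The character set and the guess rate are assumptions for illustration only; neither Tesla's actual password policy beyond the six-character length nor its real throttling behaviour is documented in the summary above.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Assumed charset: 26 lowercase + 26 uppercase + 10 digits = 62. */
    const uint64_t charset_size = 62;
    const int password_length = 6;

    uint64_t keyspace = 1;
    for (int i = 0; i < password_length; i++)
        keyspace *= charset_size;

    /* Assumed (purely illustrative) guess rate against an unthrottled endpoint. */
    const double guesses_per_second = 1000.0;
    double days = keyspace / guesses_per_second / 86400.0;

    printf("keyspace: %llu candidate passwords\n",
           (unsigned long long)keyspace);
    printf("exhaustive search at %.0f guesses/s: about %.0f days\n",
           guesses_per_second, days);
    return 0;
}

Under those assumptions the whole space is roughly 5.7e10 passwords, which is why rate limiting and lockout on the login endpoint matter so much.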

Comment Weasel words (Score 4, Insightful) 155

Just a few days ago we were told that the Free Software Community hates Canonical. Then again, who is this Free Software Community? I've been using free software since before it was fashionable to call it that, so I think I use plenty of software coming from the Free Software Community. Today I happen to use some pieces of free software from Canonical. Of the works by the people presented in TFA as spokesmen for the "Free Software Community", I use nothing, so I see more contribution to the Free Software Community from Canonical than from them.

Don't like software from Canonical? Don't use it. They're a commercial company, so ultimately they have to break even. I understand if, after listening to everyone, they make their own decision. Their Mir project is all about Ubuntu phones: should that platform succeed, they'll take the credit; should it fail, the Free Software Community will still have Android as its reference platform. Even though Google is a commercial company too, and compared to it Canonical is Candy Candy.

Comment Re: What an open source baseband can be. (Score 1) 137

No matter how many internet comments you post, you still can't prevent other people from posting their own. A badly behaving radio, however, will prevent all other radios working on the same frequency bands from operating correctly. It doesn't take a very powerful radio to cause massive denial of service. Also, a hacked radio could make use of frequencies that its owner hasn't paid to use.

Comment Re:Irrational open source fanboys (Score 2) 137

because they can't just hard code that right into the chip and never let you see it ...

No, because we would see either the software interfacing with the hard-coded backdoor, or some undocumented hardware means of communication coming out from the chip, and we'd start asking questions.

So if I just embed my code into the processor itself, you won't bitch.

That's just silly.

Embedding code in (read-only or flash) ROMs is actually preferred from Stallman's point of view, because it allows the hardware to work out of the box when using free software to control it. Binary firmware is problematic for free software operating systems, not because free software enthusiasts have some maniacal obsession with not running binaries they haven't compiled themselves, but because the copyright holders of the firmware binary blobs often attach very restrictive licensing conditions to them, making them very hard or impossible to redistribute.

Comment Re:I dont get it (Score 1) 551

Now name one that the UN left for the US to hop the bag on for over a decade instead of taking care of business

Then the next time Russia or some other country you don't like "takes care of some business" without waiting for the U.N., don't act outraged. Principles can't be bent to one's convenience.

United States

White House To Propose Ending NSA Phone Records Collection 208

The New York Times reported last night that the White House is planning to introduce a legislative package that would mostly end the NSA's bulk collection of phone records. Instead, phone companies would be required to hand over records up to "two hops" from a target number. Phone companies would be required to retain records for 18 months (already legally mandated) instead of the NSA storing records for five years. It does not appear that secret courts and secret orders from the court would be abolished, however. From the article: "The new type of surveillance court orders envisioned by the administration would require phone companies to swiftly provide records in a technologically compatible data format, including making available, on a continuing basis, data about any new calls placed or received after the order is received, the officials said ... The administration’s proposal would also include a provision clarifying whether Section 215 of the Patriot Act, due to expire next year unless Congress reauthorizes it, may in the future be legitimately interpreted as allowing bulk data collection of telephone data. ... The proposal would not, however, affect other forms of bulk collection under the same provision."

Comment Re:I dont get it (Score 1) 551

Iraq was invaded without U.N. authorization because the U.S. had produced false evidence of weapons of mass destruction being stored inside Iraq. The satellite countries that agreed to take a (minor) part in the invasion officially stated that they were convinced by the U.S.' false evidence; in reality, all they were after was a share of the feast of the war's aftermath - they gave so they could receive.

There are other countries that repeatedly defy the United Nations, and yet the U.S. would never invade them.

Which is not to say that Saddam Hussein was a nice guy and the U.S. is the empire of evil - I am certainly happier living under the U.S.' influence than under Russia's - but let's not paint conflicts of political interest in Manichaean terms.

KDE

KDE and Canonical Developers Disagree Over Display Server 202

sfcrazy (1542989) writes "Robert Ancell, a Canonical software engineer, wrote a blog post titled 'Why the display server doesn't matter', arguing that: 'Display servers are the component in the display stack that seems to hog a lot of the limelight. I think this is a bit of a mistake, as it’s actually probably the least important component, at least to a user.' KDE developers, who have long experience with Qt (something Canonical is moving toward for its mobile ambitions), have refuted Ancell's claims and said that the display server does matter."

Comment Google arrogance (Score 5, Informative) 363

They didn't address any of the problems. They just called them "myths" and said "don't worry, trust us, everything will be fine" for each one of them. And they did so using condescending, arrogant and insulting language (look, for example, at the passage where they declare that they want people to wear Google Glass inside locker rooms (!): "just bear in mind, would-be banners..."). This reinforces my distrust of the company and my concern about the product.

Comment Re:Why should I drop glibc? (Score 1) 134

Except statically linked binaries. Those were linked at build time, but they don't invoke the linker when they are executed. Guess what I use musl for? Building static binaries.

But you're moving the goalposts now. What I said was troublesome was using musl (or anything else that is glibc-incompatible) as a replacement for glibc. Using it for selected static binaries that you build yourself is another thing.

Nope. Binaries I only need one or the other, not both. Libraries only when I need a musl version of a library.

That's because I was talking about replacing glibc with musl and being able to run binaries, over which you have no control, written for either of the two libraries. Which is the only scenario that would make end users happy, should glibc coexist with musl.

No. Only for shit software that doesn't have any kind of protocol.

Again, that's stuff that is out of our control. Engineers have to design their systems for the worst case, not for the best.

Wrong. ISO C specifies stdint.h, int64_t is a standard way to get a 64-bit signed integer, and uint64_t is standard for 64-bit unsigned integers. The fact that you don't know ISO C is illuminating.

Microsoft's stuff is compatible with ISO C90. uint64_t was unsigned __int64 in their world until very recently.
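
A minimal sketch of the kind of compatibility shim that difference forces on code which has to build against both glibc and older Microsoft compilers; the typedef names are made up for illustration, and the _MSC_VER cutoff reflects the fact that <stdint.h> only appeared in Visual Studio 2010:

/* int64_compat.h - hedge against compilers without <stdint.h> */
#if defined(_MSC_VER) && _MSC_VER < 1600
/* Older Visual C++ (pre-2010): no <stdint.h>, only the vendor keyword. */
typedef __int64            compat_int64;
typedef unsigned __int64   compat_uint64;
#else
/* C99 / glibc and modern MSVC: the standard header is available. */
#include <stdint.h>
typedef int64_t   compat_int64;
typedef uint64_t  compat_uint64;
#endif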

Never works in practice. If the prototype for a library function is the same, and the calling semantics is the same, then it can be relinked without recompilation whether it was statically (unless it was stripped) or dynamically linked, and if either has changed, the code needs to be reviewed, rewritten and recompiled in either case.

With versioned symbols (glibc) you don't have this problem. But if you add support for them in your library, then you add "bloat". So you have to decide whether you want to add "bloat" or to make your users unhappy.
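
For readers who haven't met them, here is a minimal sketch of what glibc-style versioned symbols look like in practice. The library, symbol and version names are invented, and this relies on the GNU toolchain (gcc plus GNU ld with a version script):

/* libfoo.c - keep two ABIs alive behind the same public name "foo" */

/* Old behaviour, preserved so binaries linked against LIBFOO_1.0 keep working. */
int foo_v1(int x) { return x + 1; }

/* New, incompatible behaviour, picked up by newly linked binaries. */
int foo_v2(int x) { return x * 2; }

/* Bind both to the exported name; "@@" marks the default for new links.
   The LIBFOO_1.0 and LIBFOO_2.0 nodes must also be declared in a linker
   version script passed with -Wl,--version-script. */
__asm__(".symver foo_v1,foo@LIBFOO_1.0");
__asm__(".symver foo_v2,foo@@LIBFOO_2.0");

That extra machinery is exactly the "bloat" being traded against breaking users.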

I don't know what adding getline has to do with existing programs. If they are not already making use of getline, then even if they are recompiled, they will still not make use of getline, and will not require it's symbol for linking. The Austin group (POSIX) are careful to not break stuff when revising standards, though I'm sure you can still point to some breakage; in general they tend to revise things by adding new symbols, or assuming the greatest common behavior between implementations.

They added getline() to the standard, and in doing so they broke all the software that, perfectly standard-compliant until then, had used the getline() name for its own purposes. This is to say that hoping to solve binary compatibility through standard compliance simply doesn't work. They're two different problems.
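
A minimal sketch of the kind of pre-2008 code that this broke; the helper is invented for illustration:

/* Perfectly legitimate before POSIX.1-2008: "getline" was not a reserved name. */
#include <stdio.h>

/* The application's own helper, unrelated to the later POSIX getline(). */
static int getline(char *buf, int size)
{
    return fgets(buf, size, stdin) ? 0 : -1;
}

int main(void)
{
    char line[128];
    if (getline(line, sizeof line) == 0)
        printf("read: %s", line);
    return 0;
}

On a post-2008 glibc, merely including <stdio.h> under the default feature macros pulls in the POSIX prototype for getline(), and this program stops compiling because of the conflicting declarations, even though it never stopped being valid ISO C; building it with a strict -std=c99 and no POSIX feature macros still works.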

Autotools gives me, as a user, and have done so since the 90s: - cross compiling support;

Never works.

I've bootstrapped my entire 64-bit Linux installation from a 32-bit host using it. It has almost always worked for me, even for packages whose developers had never thought about the possibility of cross-compilation. I have no doubt that packages using autotools were the ones that gave me the fewest problems.

However plain old Makefiles, I just set CC, CFLAGS, LD, LDFLAGS and LIBDIR and things just work (also CXX and CXFLAGS for C++).

Only for the simplest cases. When things get more complex, you'll have to handle the difference between the host C compiler (which can be used to compile stuff which will run on the build host, such as code generators) and the target C compiler (which can only be used to produce binaries that won't run on the build host).

- ability to change any installation path;

I can do that with "PREFIX=/foo/bar make install" with any well written makefile.

That's the difference: with autotools you are almost sure to get those features out of the box, and they're standardized. With other systems you have to hope that the developers wrote their makefiles well, and I can tell you that the current trend among developers is to invest less and less time in packaging their source code. You also have to study the makefiles to see whether it's DESTDIR or INSTALL_ROOT or ROOT_PREFIX or something else. Also, consider the difference between PREFIX and DESTDIR: the first can end up in paths stored in the generated code; the second won't.

- support for building shared and static libraries simultaneously using the best compiler options for each case; and probably something else that I'm forgetting.

make staticlibs; make dynamiclibs

Then you have to take into account the different flags required for building static vs shared libraries on every platform that you want to target (e.g. Linux supports non-PIC code in shared libraries on i386, other OSes don't, x86_64 never supports it...). With autotools you get that for free and out-of-the-box.

Fixing a broken homemade Makefile takes me a few minutes. Fixing a broken autobarf takes me hours to days.

Modern .ac files are much, much simpler than the Makefiles and scripts that they generate. I can't see how you find them harder to fix than the Makefiles themselves. Have you ever tried to debug a problem in a .cmake file, with its arcane language? And what about SCons, which basically works with raw Python scripts...

Porting code to 9front is basically a rewrite as it's so alien, so let's put that to one side.

I don't think autotools work well, if at all, outside UNIX-ish systems. Even on Windows, they require impedance-adaptation layers such as Cygwin, MinGW or Interix.

Comment Re:Why should I drop glibc? (Score 2) 134

Virtually all binary software is distributed in enormous tarballs containing their own libc, and every other library they use, and a bunch they don't simply because static linking doesn't work against glibc properly (and violates the license for non-free software).

Please don't spread false information this way.

Take Firefox for example, from Firefox.org, not the version shipped with your distro. Every Firefox tarball includes their own build of GTK, GDK, glibc, libnspr, etc.

Yes, let's take the version from firefox.org that I'm using right now.

$ ldd /opt/firefox/firefox-bin
linux-gate.so.1 (0xb77bb000)
libpthread.so.0 => /lib/i386-linux-gnu/libpthread.so.0 (0xb776b000)
libdl.so.2 => /lib/i386-linux-gnu/libdl.so.2 (0xb7765000)
librt.so.1 => /lib/i386-linux-gnu/librt.so.1 (0xb775c000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0xb7675000)
libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb762f000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0xb7614000)
libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb746a000)
/lib/ld-linux.so.2 (0xb77bc000)

See? It's using my libc, so what you said is completely incorrect. Shall we see if it uses its own GTK library as you said? I'll paste the ldd result here: http://pastebin.com/UsP0MHUe , so you can see for yourself that Firefox uses the system libraries instead.

Because dynamic linking is so horribly broken on Linux that is the only way it could possibly work on more than one distro.

Dynamic linking on Linux is state of the art. If it doesn't work for you, look for the culprit somewhere other than the operating system. If anything, it's static linking that is badly supported (by glibc, not by Linux).

SORRY.

...

Comment Re:Why should I drop glibc? (Score 1) 134

Actually it is.

It's linked by every executable on your system, and it has the role of loading every single dynamic binary you have installed, including init. It's special.

I have both musl and glibc on my systems, and they have different filenames, so I can have both installed without conflicts.

Of course you can have both installed, but not without conflicts: you must be very careful to avoid the sea of conflicts that arises when you have two incompatible libraries with the same name.

Programs interoperate through IPC and the filesystem, so having programs built against different libc on the same system is not a problem.

You'll have to:
- duplicate every single binary and library to have libc and musl versions;
- ensure that the version of each binary is the same for its libc and musl image (otherwise IPC and filesystem communication will fail);
- ensure that the libc and musl versions of the binary are installed in two different paths and don't interfere (otherwise a libc program dlopening a musl plugin will fail), and rewrite all the software which uses hardcoded paths (e.g. Python);
- ensure that libc and musl types are binary-compatible (otherwise IPC and filesystem communication will fail);
- ensure that every package you compile finds the correct version of both libc and musl when you compile it;
- have your machine withstand the memory pressure of two different and unshareable userspace images running simultaneously (so much for fixing "bloat");
- ...

And musl and glibc are both source-code compatible in that they both implement a POSIX and ISO compatible libc.

Even Microsoft Visual C++ is ISO C compliant. So you can write programs that will compile on both GLIBC and Visual C++. As long as you don't use non-standard stuff such as 64-bit integers.

The only place where you would get source incompatibilities is in non-conformant programs that take advantage of defects or non-standard APIs of one of the libc, and the real solution here is to have developers fix their code so that it conforms to the standard.

If I am a user, I want my binaries to run and that's it. I don't have the option to ask whoever coded the programs that I use to rewrite them because I don't like their choice of API usage.
Furthermore, any library that is actually deployed and used will get bugs fixed, which can cause incompatible changes, and standards will be amended in ways that can break compatibility (for instance, the addition of getline() to the C library). What do you do then? Ask users to rewrite all the programs that they use every time such a thing happens?

the headaches around trying to get Gnu's arcane and defective build system to work

I've heard millions of times that autotools suck. That's true. The problem is that everything else (and I mean SCons, CMake, Waf, ...) sucks even more. Autotools have given me, as a user, since the '90s:
- cross compiling support;
- ability to change any installation path;
- ability to have a temporary installation path for packaging;
- ability to apply a transformation to program names;
- a standardized way to change any of the tools to be used during compilation;
- support for building shared and static libraries simultaneously using the best compiler options for each case;
and probably something else that I'm forgetting.
