Comment No you do not need a fan. Skip the official case. (Score 5, Informative) 314

I have been running my Raspberry Pi 4 without temperature issues, but also without the official case. As soon as I put the Pi4 in its case and close the lid, the temperature increases until it has to throttle the speed. Outside the case it works fine, especially when I stand the Pi4 vertically on its side.

I just checked the temperature with a light load on the Pi4 and no display:
$ vcgencmd measure_temp
temp=55.0'C

So everything is fine when I do not use the official case. This temperature is very similar to what I get from my old Pi model B or Pi3B+ that are in the same room and operating at temperatures between 49 C and 65 C depending on the load. The Pi4 is a couple of degrees warmer under comparable load, but it works fine without a fan.
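
If you want to keep an eye on this over time, here is a minimal sketch in Python (assuming it runs on Raspberry Pi OS, where vcgencmd is available) that polls the temperature and the throttling flags every ten seconds:

import re
import subprocess
import time

def vcgencmd(arg):
    # Run vcgencmd and return its raw output, e.g. "temp=55.0'C"
    return subprocess.run(["vcgencmd", arg],
                          capture_output=True, text=True).stdout.strip()

while True:
    temp = float(re.search(r"[\d.]+", vcgencmd("measure_temp")).group())
    # get_throttled returns a bit mask; bit 2 (0x4) means "currently throttled"
    flags = int(vcgencmd("get_throttled").split("=")[1], 16)
    print(f"{temp:.1f} C  throttled={bool(flags & 0x4)}")
    time.sleep(10)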

However, I do not understand why the Raspberry Pi Foundation designed the official Pi4 case as a completely closed box, unlike the official Pi3 case from which it was easy to remove the top or the side panels to provide some additional air flow. It is possible to use the Pi4 case without the top, but there are no side panels so there is very little air flow under the board. This is a weird design decision that causes the board to overheat rather quickly.

Those who purchase the Pi4 Desktop Kit with the official case will get a much better desktop experience if they do not use the case (or at least remove the top part).

Comment Re:What happens with recoil?? (Score 1) 127

The flyboard could even pre-compensate and gain a few milliseconds by detecting when a shot is fired and increasing the thrust on the rear jets before the recoil tilts the flyboard backwards. The detection could be active if there is a link (wired or wireless) between the gun and the flyboard, or it could be based on sound, although the latter would be less effective because of the delay. An active link from the gun could even inform the flyboard that the gun is about to fire, in case the flyboard has a mechanism to prepare some extra power to be released when the shot is actually fired (but not before, otherwise aiming would become impossible).
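
Just to illustrate the feed-forward idea (pure speculation on my part, not anything from the actual control loop; all names and constants below are made up):

RECOIL_IMPULSE = 12.0   # hypothetical recoil impulse of the gun, in N*s
BURST_DURATION = 0.05   # spread the compensation over 50 ms

def rear_thrust(base_thrust, fired_at, now):
    # base_thrust comes from the normal attitude controller; we only add
    # a short burst timed to counter the recoil, never before the shot,
    # otherwise aiming would be disturbed.
    if fired_at is not None and 0 <= now - fired_at < BURST_DURATION:
        return base_thrust + RECOIL_IMPULSE / BURST_DURATION
    return base_thrust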

I assume that they already patented that idea. ;)

Comment Re:Nah, they're being duplicitious (Score 1) 138

Indeed, they haven't backed down at all.

This also means that the only way to run an ad blocker extension with Chrome will be to give Chrome the full list of sites that you want to block. Even if that list is not immediately transferred to Google, they can still analyze it and check what Google ads are blocked, how well Google ads are doing compared to their competitors, etc.

If I want to block some trackers that invade my privacy, Google will make this harder (if the block is currently based on a pattern rather than a static list of URLs) and in addition they will know about it. They still present this change as a nice feature, but it is actually a privacy disaster. And they are obviously trying to confuse the journalists by claiming that they have changed their mind while they are still pushing the exact same bad idea (just with a slightly higher limit on the static list of URLs, which is the broken concept).
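
To make the "static list" point concrete: with Manifest V3's declarativeNetRequest API, a blocker hands the browser a bounded list of declarative rules instead of running its own matching logic on each request. The real rules are JSON inside the extension package; this sketch only shows their shape as a Python structure (the tracker domain is made up):

# One declarativeNetRequest-style rule: block scripts and images
# from a given domain. The extension never sees the requests itself.
block_rules = [
    {
        "id": 1,
        "priority": 1,
        "action": {"type": "block"},
        "condition": {
            "urlFilter": "||tracker.example.com^",
            "resourceTypes": ["script", "image"],
        },
    },
]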

Comment Re: The kernel is great. The userland isn't. (Score 4, Interesting) 69

10 or 20 years ago, Debian and GNOME 1.4 or 2 worked perfectly for me... on my single-CPU, single-screen computer that was always connected to the same network.

Nowadays, I expect my multi-core laptop to switch networks automatically when I move from home to work and vice-versa, to have a seamless multi-screen display when I connect it to external monitors, to switch the audio input and output automatically when I plug in my headset with microphone or connect to a monitor with built-in loudspeakers, and to run multiple virtual machines or containers easily.

I still use Debian today (as well as Ubuntu, SuSE and CentOS, as a main OS or in various VMs). I am not so happy with some of the choices made in GNOME 3 and I understand some of the frustration around systemd, but I also admit that many things that were a pain to configure 10 years ago are now working out of the box in most cases.

FWIW, I have been using Linux since 1992 when SLS was distributed on 12 floppy disks and only had a half-working TCP/IP stack. I recycled those floppies in the meantime, but I still have my CD-ROM "Yggdrasil Linux/GNU/X Fall 1993". Sometimes I look back at these old systems with a bit of nostalgia, but I also know that I am more productive today with a modern version of Linux.

Comment Re: cert expiry fail (Score 2) 158

Verifying the signature of a piece of code that is already installed is very different from verifying the signature of something that you are about to install.

If I am installing something new, I would not trust a signature if any part of its certificate chain has expired. But if the code has already been installed and was trusted before the certificate expired, then I am much safer because I only need to verify that the code has not changed since then (and this can be done in more than one way, not necessarily based on the certificates).

The problem that has hurt Firefox in the past is that some programs were installing unwanted extensions or modifying existing extensions without the user's knowledge or consent. It was easy for the malicious programs to do that by adding or modifying XPI files in the user's profile directory. Signed XPI files are much harder to tamper with, as long as the certificate chain is safe. The limited validity period of the certificates is one way to reduce the risk that someone cracks a signing key. (Note that the scope of the defense is limited to the files in the user's profile directory, not the installed files of Firefox itself.) But the problem is simpler if you only want to verify that some files that were already verified in the past have not been modified.

After an extension has been validated and installed, Firefox could record a signature of that code in a safe place so that it could still use that signature even if some parts of the certificate chain have expired. That signature could be the original signature of the package, a locally generated signature, or preferably both. A locally generated signature would be unique for each user, so it would be harder for a malicious program to install an unwanted extension for all users. The "safe place" for the list of validated signatures could also be part of the user's profile, but it could be protected against tampering by signing it or even encrypting it with a separate key. And if you want to be extra safe, then use a public/private key pair for that and make sure that the private part is not available without user interaction (password-protected key) and/or without connecting to the Mozilla servers.
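
A minimal sketch of the "locally generated signature" part, in Python (this is my proposal, not how Firefox actually works, and the key handling is deliberately simplified):

import hashlib
import hmac
import json

def local_signature(xpi_path, user_key):
    # HMAC over the installed XPI: unique per user, so a malicious
    # program cannot precompute a valid entry for every profile.
    with open(xpi_path, "rb") as f:
        return hmac.new(user_key, f.read(), hashlib.sha256).hexdigest()

def record_validated(registry_path, xpi_path, user_key):
    # Called once, right after the normal certificate-based validation.
    try:
        with open(registry_path) as f:
            registry = json.load(f)
    except FileNotFoundError:
        registry = {}
    registry[xpi_path] = local_signature(xpi_path, user_key)
    with open(registry_path, "w") as f:
        json.dump(registry, f)

def still_trusted(registry_path, xpi_path, user_key):
    # Later checks only need to prove that the file is unchanged, even
    # if the original certificate chain has expired in the meantime.
    with open(registry_path) as f:
        registry = json.load(f)
    expected = registry.get(xpi_path)
    return expected is not None and hmac.compare_digest(
        expected, local_signature(xpi_path, user_key))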

Comment Re:Spot the blame-jumping (Score 1) 68

Sure, this is clearly a shitty thing for an extension to do - but the real blame lies squarely with the FF devs. On what fscking planet is there justification for ALLOWING an extension to access history in the first place?!

Your criticism is misdirected. Stylish does not need to access your browsing history (something that Firefox can block). But Stylish is designed to be active on every page that you visit so that it can apply custom styles for that site or tell you if some user styles exist for it. As a result, Stylish sees every page that you visit, so it can collect and transmit its own view of your history. And unfortunately, that history can include some sensitive information, as explained in the article.

Comment Re:Maybe easier (Score 1) 59

Maybe easier is to spot images where PS users kept exif or other information telling that it was PSed (personally I use Gimp)

Obviously, if you want to create a fake you should either remove all metadata (EXIF, IPTC, XMP and proprietary tags) or copy it from the original image. If you claim that you took an image straight from your camera and it contains Photoshop tags or a comment "Created with GIMP", then you will be busted.

Tampering with the metadata is an important step in creating good fakes. However, there is a lesser-known property that can often identify the true source of a JPEG image: its quantization tables. The quantization tables are included in every JPEG file and determine how the image is compressed. GIMP and all other programs using IJG's libjpeg or variants such as libjpeg-turbo use predefined quantization tables that can easily be recognized (each of the quality levels from 0 to 100 generates a fixed set of 64 integers for the luminance and chrominance quantization tables). Similarly, Photoshop has its own tables for each quality level. Most cameras also have their own quantization tables that are different from GIMP's and Photoshop's; they may even be generated or adjusted dynamically depending on the contents of the image.
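
You can check the tables yourself: with Pillow, for example, a JPEG image exposes its quantization tables directly (a sketch; the file name is a placeholder):

from PIL import Image

img = Image.open("photo.jpg")
# img.quantization maps table id -> 64 coefficients (0 is typically
# luminance, 1 chrominance); comparing them against the known libjpeg
# or Photoshop tables hints at which encoder produced the file.
for table_id, coefficients in img.quantization.items():
    print(table_id, list(coefficients))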

I analyzed the quantization tables from GIMP (libjpeg), Photoshop and many cameras more than 10 years ago. I even wrote a blog post in 2007 doing a rough comparison of the quality levels of each program. Even if an image has no metadata or if its metadata has been tampered with, one can tell if it has been modified with GIMP or Photoshop by looking at its quantization tables. I found an interesting article published a year later at the Digital Forensics Research Conference 2008 that explains this clearly: "Using JPEG Quantization Tables to Identify Imagery Processed by Software" (also available here).

If you use GIMP to save a fake image, then there is an advanced option in the JPEG Save dialog that is only available if you started from a JPEG image. That option is "Use quality settings from original image", which actually copies the quantization tables from the original image instead of letting libjpeg generate new tables based on the quality level that you selected. That would be one more step towards creating a perfect fake.

One more thing that could betray a fake image is the order of the markers and segments in the JPEG file. A JPEG file is made of several segments, each preceded by a two-byte marker identifying the type of segment: start of image, quantization tables, etc. GIMP (libjpeg) and Photoshop store these segments in the file in a different order. They may also add some markers that are usually not present in the files saved by cameras, or remove some of them (restart markers, proprietary APPn markers, etc.). The GIMP JPEG plug-in will not allow you to create a perfect fake because it will always save the markers and segments in the same order, but with the right tools it should be possible to re-order them as if the file had been created by a camera.
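
Listing the markers in order is easy, so two files can be compared; here is a minimal sketch of a segment walker:

import struct

def list_segments(path):
    # Each JPEG segment starts with 0xFF followed by a marker byte;
    # most segments then carry a 2-byte big-endian length that
    # includes the length field itself.
    with open(path, "rb") as f:
        data = f.read()
    pos, segments = 0, []
    while pos < len(data) - 1 and data[pos] == 0xFF:
        marker = data[pos + 1]
        segments.append(f"FF{marker:02X}")
        if marker in (0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
            pos += 2        # SOI, EOI and RSTn have no length field
        elif marker == 0xDA:
            break           # start of scan: entropy-coded data follows
        else:
            (length,) = struct.unpack(">H", data[pos + 2:pos + 4])
            pos += 2 + length
    return segments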

If you are good at it, you could modify all metadata so that it looks identical to what a camera would produce. Then the only things that could betray your fake image are the pixels of the image: local variations in noise, contrast, etc. This is what is explained in this article about Adobe using AI to detect these variations.

Comment Re:Easy to fool... (Score 1) 59

The problem with your step 3 (Scan or take photo of image) is that it is very hard to do it well. It is not trivial to take a photo of a printed image with uniform lighting, no glare, uniform focus, and generally no trace of the paper grain. Scanning could work, but then the image will not be coming from a camera.

Also, the new image from the camera or scanner will contain EXIF metadata that usually includes information about the focal distance, exposure, etc. These values will not match the conditions under which the original photo was taken. And if you strip all metadata from the image, then it will look suspicious because nowadays almost all cameras include EXIF metadata in the files that they save.

Of course, if you have the right tools you can also edit the metadata or even merge the EXIF metadata from the original image into the new image. This could work, but then it is not so easy anymore.
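
With Pillow, for instance, carrying the original EXIF block over into the edited image takes only a few lines (a sketch; the file names are placeholders):

from PIL import Image

original = Image.open("straight_from_camera.jpg")
edited = Image.open("edited.jpg")

# Pillow keeps the raw EXIF block of a JPEG in info["exif"]; passing
# it to save() writes it into the new file unchanged.
exif_bytes = original.info.get("exif", b"")
edited.save("edited_with_camera_exif.jpg", "JPEG",
            exif=exif_bytes, quality=95)

That takes care of the metadata; the quantization tables discussed in my other comment are a separate problem.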

Comment Re:Without a fuck-ton of false positives..... (Score 2) 170

.... did the court offer any suggestions as to how, technically speaking, Youtube was supposed to achieve this?

Technically, there are many ways to do this. For example, YouTube could only allow uploads from people whose identity has been verified (so that law enforcement can come and say hello to them if necessary), or YouTube could hire a huge team of moderators to review all videos before they become visible online, or they could combine both by introducing a collaborative pre-moderation system in which users whose identity has been verified can approve a video posted by someone else (and share the liability for any copyright infringement), or they could strengthen their existing mechanisms for automatic detection of known copyrighted works, etc.

Technically, this is not a problem. Of course, most of these solutions would significantly hurt YouTube's business model and could have a chilling effect on the users who might then refrain from posting anything if they fear that they would be sued. Finding a solution that works technically and that also makes sense from a business point of view is much more difficult. But it is not the role of the court to find a better business model for a company; their role is to ensure that YouTube and other video hosting platforms respect the law.

The very *best* case scenario here if Austria gets what they are asking for is that this is going to result in entirely legal videos which might contain parody, satire, or commentary on copyrighted works being blocked from being viewed in Austria, as well as any other entirely original works that might happen to have some superficial similarity to a copyrighted work. It only goes downhill from there.

I agree that there is a huge risk that the pendulum swings too far in the opposite direction, which would result in misuse by copyright holders targeting perfectly legal works (parody and other cases of fair use). But I doubt that we will see that anyway, because it is unlikely that this preliminary ruling will survive for long.

Comment GitLab, Bitbucket or SourceForge, but not so easy (Score 4, Insightful) 241

There are several alternatives to GitHub: GitLab, Bitbucket, SourceForge and probably many others, including self-hosting (which could be a GitLab instance or any homegrown system).

One of the nice features of git is that it is trivial to move a git repository to a different host while preserving the full history. However, a Free Software project is much more than just the source code: the multiple links between the commits, the pull requests and the issue tracker are also very valuable. Unfortunately, this part is much more difficult to migrate correctly because the ids in the issue tracker may change, the references to related repositories may be broken, the user identities may not match, etc. These things are also part of the project and there is a risk that some of them are lost when the project migrates to another host.
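
The repository part really is trivial: a mirror clone plus a mirror push carries every branch and tag to the new host (a sketch driving git from Python; the URLs are placeholders):

import subprocess

def mirror_repository(source_url, destination_url, workdir="mirror.git"):
    # "clone --mirror" copies all refs (branches, tags, notes), and
    # "push --mirror" recreates them verbatim on the new host.
    subprocess.run(["git", "clone", "--mirror", source_url, workdir],
                   check=True)
    subprocess.run(["git", "-C", workdir, "push", "--mirror",
                    destination_url], check=True)

mirror_repository("https://github.com/example/project.git",
                  "https://gitlab.com/example/project.git")

Everything else (issues, pull requests, user identities) is where the real migration work lies.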

It is nice that SourceForge offers a tool to import a GitHub project. However, this tool is not perfect and suffers from some of the issues mentioned above: for example, all GitHub issues imported as SourceForge tickets will require a lot of manual editing to fix the ownership of the tickets, and it will not be possible to fix all comments. Besides, there is the issue of trust: I was a GIMP contributor and I was hurt by what happened three years ago when SourceForge hijacked the distribution of GIMP for Windows and replaced Jernej's installer with another package containing malicious software. Although I appreciate SourceForge's efforts to reverse the damage done at that time, I do not think that I would trust them to respect the privacy and security of their users. They had broken their promise (they promised that they would never hijack a project, but they did exactly that a few months later) and now I am worried that it could happen again, even if the SourceForge team has changed.

GitLab also offers a tool to import a GitHub project. That tool is not perfect either, but it is better than SourceForge's tool, for example for importing pull requests and their review comments. It is also better than Bitbucket's, which requires a lot of manual steps for importing a project.

So from my point of view, GitLab seems to be the best alternative for the migration. Its web interface also feels a bit lighter than Bitbucket's. However, I will probably wait a bit and see what changes in GitHub before moving any of my (private) projects, because the migration effort is significant.

Comment Re:Glad its GNOME and not KDE (Score 1) 150

There is fairly good proof KDE is the more complete and better thought out desktop environment.

You are a rather successful troll, because your comment was quickly moderated to "+5 Interesting" although it is just a rant without substance. Congratulations for that!

Regretfully no amount of money can make Gnome catch up, it was broken from the start.

And of course you have proof of that as well?

Comment Re:Tangent: Stallman says software is political (Score 1) 521

When people call an OS by its kernel's name they're being remarkably inconsistent

Think about "Windows Subsystem for Linux", which contains no Linux code but is designed to run "Linux" programs on Windows 10.

That being said, I still have my Yggdrasil "Linux/GNU/X Fall 1993" CD at home. It is funny how this naming debate keeps coming back, with remarkably few new arguments.

Comment Re:Nothing "new" here (Score 1) 553

The law should be ignored by all non-EU web sites.

That law can be ignored by non-EU web sites that are not doing any business with EU citizens or companies.

But if you are doing business with the EU, then you have to comply (as with many other laws that apply to international business, so this is not unique to the EU). Of course if you break the law it will be a bit more difficult for your victims to sue you if you do not have any presence in the EU, but it will still be possible.

As the FAQ says (italics mine):

Who does the GDPR affect? The GDPR not only applies to organisations located within the EU but it will also apply to organisations located outside of the EU if they offer goods or services to, or monitor the behaviour of, EU data subjects. It applies to all companies processing and holding the personal data of data subjects residing in the European Union, regardless of the company’s location.

Comment Re:Robots.... Dash, Sphero, and Ozobots (Score 1) 353

I would add Marty the robot to this list : https://robotical.io/

Although Marty is a project that only started shipping recently after finishing its crowdfunding campaign, it is very interesting because it is fully open (you can even 3D print the parts yourself) and it can be programmed in Scratch, Python, JavaScript or whatever language you want, because the API is open and easy to implement. The robot comes as a kit, and once assembled it can be controlled over wifi. I got mine a few days ago and I am having fun with it.
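
For example (a purely illustrative sketch: this is not Marty's real protocol, and every address and command below is made up), driving such a robot over wifi boils down to sending small messages to a socket:

import socket

ROBOT_ADDRESS = ("192.168.1.50", 24)  # hypothetical IP and port

def send_command(command):
    # The real API documents its own message format; this only shows
    # the general shape of controlling a robot over the network.
    with socket.create_connection(ROBOT_ADDRESS, timeout=2) as conn:
        conn.sendall(command.encode("ascii") + b"\n")

send_command("walk 4")     # made-up command: take four steps
send_command("wiggle")     # made-up command: wiggle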

Instead of teaching my kids to program, I would rather get them interested first. Do not teach them how a program works and how to write a program from scratch. Instead, start from an existing program that already does something useful and show them how they can change a few lines to get a different output from the program or a different behavior from the robot. Then let them play with it and experiment on their own without too much supervision. If they are curious and show interest in learning more, then explain the concepts and the basics of what they have been playing with. If they stop playing and think that it is boring, then do not force it on them but try something else instead (another type of program, another thing that is fun to modify). If you try to teach them how to program before they ask for it, then there is a risk that they will give up and stick to playing games or spending their time on social networks instead of discovering how much fun there is in programming.
