
Comment Re: Is the Apple AppStore next? (Score 2) 103

Epic already lost its suit against Apple months ago.

I'm not an expert, but doesn't it seem like the legal results are backwards (assuming different results for the cases are legitimate at all)?

Unless I'm mistaken, it is officially allowed and fairly straightforward for a user to install anything on Android without going through Google, but iOS only allows that if you (as a user) sign up as a developer (including a fee, I think). Doesn't that mean Apple is a worse/stronger monopoly than Google?

Comment Re:The 'Java Way' is absurd complexity (Score 5, Interesting) 145

Along the same lines, there is also Benji Smith's 2005 hammer factory factory post on the now-defunct joelonsoftware discussion forum. The original is long gone, but there are at least two copies elsewhere:

https://factoryfactoryfactory.net/

https://medium.com/@johnfliu/why-i-hate-frameworks-6af8cbadba42

(The original also had some good follow-up discussion, but that is harder to find.)

Comment Re:They also enforced having OAuth2 which blows (Score 1) 72

I agree OAuth2 is a real PITA; I've got a few gmail accounts and it has been getting insane. They don't really seem to want to support POP/IMAP and other non-"gmail.com" web clients. However, there are some options:

I'm not sure if it is still possible to leave 2FA off on an account that Google decides is low importance (like one that only ever receives mail from public mailing list subscriptions). It seems to be getting more and more difficult. In the last few years they added logic that (sometimes) disables access if they detect you coming from IP address ranges you aren't "normally" using (such as when travelling). This basically forced me to enable 2FA on all of my gmail accounts.

Last I checked, it was still possible to enable and generate "application passwords" that you can use in any POP or IMAP client, but not for any other kind of access. (Unless you are using business class "G Suite" and your company's admins disabled the application password option.) Most of my accounts are still using application passwords, and I occasionally get emails about "improving" (scare quotes) my security by disabling them, but maybe I'm just grandfathered in? (I could log in with my browser and check what options gmail's settings currently offer, but I disable both javascript and cookies from most places by default (especially Google), and I don't feel like going through the hassle of temporarily opening holes in my personal security policy right now (and tracking down passwords and TOTP secrets for "second factors") just to refine this comment.)

Finally, if nothing else works, I maintain some instructions on using OAuth2 with a normal local UNIX email account routed through gmail, as a kind of public service. See https://mmogilvi.users.sourceforge.net/software/oauthbearer.html. This requires carefully setting up a lot of little details, but once it is working, cron jobs seem to be able to keep using a renewal token for years without manual intervention. The instructions, scripts, patches, etc. could probably all use some updates, cleanup, and streamlining, but I think all the currently-critical tidbits are there, including a workaround for Google's decision a couple of months ago to disable so-called "out-of-band" initial token acquisition. I use this for a single account, just to make sure I notice and can try to document it the next time Google decides to break something.

Comment Memories... (Score 2) 24

I'm too young for the 1960's original, but there was a Scientific American article about how to write a clone of Spacewar back in the late 80's, probably in one of the regular "Computer Recreations" columns. Most of those columns were interesting - this wasn't the only thing I learned from them. It might have been February 1987, but I'm not sure (the table of contents doesn't go into enough detail, I can't find a good index of the Computer Recreations columns online, and the printed version of this issue is missing from its place on my shelves - maybe because it is one I made relatively extensive use of?)

I did get a few versions of the game working on my hot new 80386, and it was kind of fun to play it against siblings. The development difficulties were more about the development tools and hardware I had, not with the game itself.

One option was the GW-BASIC that came with MS-DOS 2.1. It had no knowledge whatsoever of the Hercules Graphics Card clone I had. Hercules was basically an MDA (text-mode-only) adapter with some extra RAM and bypass circuitry around the core 6845 CRT controller chip, enabling a "graphics mode" that dynamically reads font data out of character-position-based offsets into RAM rather than the intended font ROM, giving the appearance of a 1-bit-per-pixel graphics mode. However, since BASIC didn't know anything about this card, everything had to be done with individual PEEKs and POKEs for individual pixels (no efficient line or polygon drawing routines), which is extremely slow in an interpreted language.
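The per-pixel address math behind those PEEKs and POKEs looks roughly like this (a sketch from memory; the function names are mine, using the standard Hercules 720x348 four-bank scanline interleave):

```c
#include <assert.h>

/* Hercules graphics: 720x348, 1 bit per pixel, framebuffer at segment
 * 0xB000.  Scanlines are interleaved across four 0x2000-byte banks
 * (bank = y mod 4), with 90 bytes (720 pixels) per scanline. */
unsigned hgc_offset(unsigned x, unsigned y)
{
    return 0x2000u * (y & 3) + 90u * (y >> 2) + (x >> 3);
}

unsigned char hgc_mask(unsigned x)
{
    return (unsigned char)(0x80u >> (x & 7)); /* MSB is the leftmost pixel */
}

/* Setting a pixel then means OR-ing the mask into B000:offset --
 * effectively one PEEK plus one POKE per pixel from BASIC. */
```

Doing that interpreted, per pixel, is why even simple line drawing crawled.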

Another difficulty was that the documentation I had (from a PC hardware book and a CRT controller book) for programming the HGC was incomplete. It mentioned the graphics enable bit in I/O port 0x3b8, but not the "enable the enable" bit in port 0x3bf, nor the altered 6845 settings needed in graphics mode. As a result, I had to resort to a hack to get it into graphics mode to run my own programs: I would run a demo/advertisement program for a CAD package that came with the computer, Control-C out of it while it was in graphics mode, and then blindly type the commands to run my own program, since MS-DOS and the BIOS knew nothing about this graphics card and couldn't present text prompts while it was in graphics mode. (Eventually, a couple of decades later, I finally stumbled over a website that described the missing details, although such details still seem to be hard to find today. And I just now found this scan of what might be the original documentation: GB101_Owners-Manual_text.pdf.)
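For the record, the missing pieces amount to something like this (a hedged sketch: the register values are from memory, and `out_byte` here just logs what a real port write would do, since `outportb`/`outb` only makes sense on actual hardware):

```c
/* Sketch of the Hercules mode-switch sequence the books left out.
 * On real hardware out_byte() would be an actual I/O port write; here
 * it records the writes so the sequence is visible and checkable. */
static unsigned short log_port[4];
static unsigned char  log_val[4];
static int n_writes = 0;

static void out_byte(unsigned short port, unsigned char val)
{
    log_port[n_writes] = port;
    log_val[n_writes]  = val;
    n_writes++;
}

void hgc_graphics_on(void)
{
    out_byte(0x3BF, 0x03); /* config switch: permit graphics mode (+ page 2) */
    out_byte(0x3B8, 0x0A); /* control: graphics mode (bit 1) + video on (bit 3) */
    /* A complete switch also reloads the 6845 CRTC (ports 0x3B4/0x3B5)
     * with graphics-mode timings -- the other detail my books omitted. */
}
```

Without that first write to 0x3BF, the graphics bit in 0x3B8 is simply ignored, which is exactly why the demo-program hack was necessary.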

Another option was a UNIX system (Microport UNIX System V/386 rel 3, v 2.1). It had a C compiler that could generate reasonably fast and efficient code, but it had no interactive graphics support at all (regardless of your hardware), and its memory protections/etc. would have made it far more difficult to hack graphics in than with MS-DOS (I never tried). However, partly motivated by a desire to write graphics programs, I used UNIX to develop a C compiler that would target MS-DOS. This compiler eventually worked reasonably well; I even did some of my college programming homework with it. If you are curious, I've posted doscc online under: Miscellaneous Software.

For MS-DOS assembly I only had MS-DOS's "debug" program, which includes a very limited assembler where you always have to supply addresses numerically (not symbolically), and which only supports 16-bit code. At one point I developed a hackish GW-BASIC wrapper that gave limited symbolic addressing support using multiple passes (insert placeholder bytes for instructions referencing symbolic addresses, then fill in correct addresses in a later pass based on addresses extracted from the first pass), but it was still awkward and very slow to assemble, and it still required really awkward hacks for 32-bit code (write the 16-bit equivalent with comments, insert the missing prefix bytes between instructions by hand (you needed to understand the machine language fairly well), etc.). Later (roughly as the compiler was nearing completion), I wrote my own assembler and disassembler in C, which were monumentally faster and easier to use. Even with a good assembler, though, assembly is too low-level to develop anything substantial. The most notable thing I used these for was to develop my own "DOS extender" to support running 32-bit code (from my compiler) under 16-bit DOS.

Many decades ago most equipment (computer or otherwise) came with full documentation, circuit diagrams, development tools (for computers), etc. That was starting to die out by the 80's, and I still lament its demise. At least Linux, open source, and various documentation available online mitigate this somewhat.

Comment Re:So lemme get this straight (Score 1) 248

That's a good list. It may be missing various things such as anthologies of shorts, some propaganda films Disney produced during WWII, and a few sequels, but it is probably reasonable for this list to leave those things out.

For anyone interested, there was an excellent series of articles alternating between original stories and Disney's interpretations, written by Mari Ness at tor.com a few years ago. See https://www.tor.com/tag/disney-read-watch/. The last part of the series focuses on original animated films without separate source material.

She also rewatched Pixar films separately, but I can't find a great list that is both complete and limited to her articles. The closest are: https://www.tor.com/author/mari-ness/ and https://www.tor.com/tag/pixar/.

Comment Supposed to trade "monopoly" for "secrets"... (Score 2) 63

"It's written very carefully and cleverly to not disclose absolutely everything,"

Doesn't that defeat the whole point of granting a patent in the first place, where the whole reason to grant the temporary monopoly is to get people to publicly reveal the trade secrets rather than keep them secret? Sounds like they didn't hold up their part of the deal, and maybe the patent should be invalidated.

Comment Failure on many levels (Score 5, Informative) 123

I work for a credit union. I'm part of the team that is responsible for cutting off access of terminated employees. When such a ticket comes in, it usually has a termination date some time in the future (for those cases of voluntary separation where the person gave notice). Occasionally, the ticket says IMMEDIATE, which is code for "this person was fired, please cut off their access ASAP."

When I get such a ticket, I drop what I'm doing and immediately disable their AD account. This blocks them from logging in to any work computer, and it also cuts off access to the VPN. There are a number of other steps to take to completely clean the user out, but disabling their AD account effectively locks them out, and the rest of the stuff can be handled in due time.

The sort of thing described in this article would not happen under my watch.

Comment Re:This is easy (Score 0) 75

Well of course there are nuances like perhaps hiring someone like cloudflare or github to help distribute your data to the rest of the world, as long as they aren't keeping your primary copy. (Especially in some closed source format - remember the bitkeeper brouhaha a few years ago that triggered the development of git in the first place? Github at least presents the main data (source code) in a form that is fully supported independently by open source, even if not tangential data like bug tracking.)

Also if you occasionally but rarely need to ramp up to very high CPU usage, temporarily renting that capacity might be useful. Although note in the case of github distributing source code, there is no fundamental reason they need to allow running arbitrary user-supplied code/scripts on their servers (aka, cryptominers), especially in the free tier.

(Honestly, I thought these kinds of things were obvious implications and refinements of my original opinion, but apparently not.)

Comment Re:He made the right decision (Score 2) 124

The "Cluster Slack" problem from the summary (and to some extent edwdig's comment above) was a real issue with FAT16. Any volume over 32MB (not even 1 GB) runs into that issue under FAT16. It is probably the main reason FAT32 was extended from FAT16 in the first place.

But with FAT32, you can have a much bigger volume before "Cluster Slack" (or "internal fragmentation", or edwdig's "wasted a lot of disk space") becomes an issue. Based on the 28-bit cluster numbers described in https://en.wikipedia.org/wiki/... , https://en.wikipedia.org/wiki/... and andymadigan's grandparent comment, it should be possible to support volumes of about (2**28)*512 = 128 GiB before you have to increase the 512-byte cluster size (thereby increasing the per-file wasted disk space or "Cluster Slack"). Most modern filesystem types actually prefer 4096-byte "blocks" or clusters for a variety of reasons, which raises that limit to 1 TiB before the cluster size would need to increase further.
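The arithmetic behind those limits is easy to sketch (the helper names are mine):

```c
#include <assert.h>
#include <stdint.h>

/* Max volume size before the cluster size must grow: usable cluster
 * numbers are 28-bit in FAT32, versus 16-bit in FAT16. */
static uint64_t max_volume_bytes(unsigned cluster_bits, uint64_t cluster_size)
{
    return ((uint64_t)1 << cluster_bits) * cluster_size;
}

/* "Cluster slack" for one file: the unused tail of its last cluster,
 * averaging half a cluster per file. */
static uint64_t slack_bytes(uint64_t file_size, uint64_t cluster_size)
{
    uint64_t rem = file_size % cluster_size;
    return rem ? cluster_size - rem : 0;
}
```

So FAT16 with 512-byte clusters tops out at 32 MiB, FAT32 at 128 GiB, and 4 KiB clusters push FAT32 to 1 TiB, at the cost of more slack per file.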

A larger cluster size could reduce metadata overhead, although at 4 bytes of FAT entry per 512-byte cluster, the overhead is already below 1 percent. Larger clusters could also reduce the performance impact of bad "external fragmentation" with large files, but see below. Maybe some format utilities artificially increase the cluster size more than is really necessary or appropriate, by putting too much emphasis on these concerns?

That said, there are probably other scalability issues with really large FAT32 drives: linearly searching the FAT itself for free clusters to allocate (for new or growing files), or to total up for "total free space" reporting, gets slow, and FAT has no built-in mechanism to minimize "external fragmentation" as files are created/destroyed/extended. (These could be partially addressed in well-designed drivers using RAM, without changing the on-disk format.)
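The free-space problem is easy to see: with no free-cluster bitmap, a naive driver has to walk the entire FAT, something like this (a sketch; real drivers cache a next-free hint, and FAT32's FSInfo sector caches a free count, but both can go stale):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* A FAT32 entry of 0 means "free"; counting free space means visiting
 * every entry -- O(n) in the volume size, which is the scalability
 * problem with very large FAT32 volumes. */
static uint64_t count_free_clusters(const uint32_t *fat, size_t n_entries)
{
    uint64_t free_count = 0;
    for (size_t i = 2; i < n_entries; i++)   /* entries 0 and 1 are reserved */
        if ((fat[i] & 0x0FFFFFFF) == 0)      /* top 4 bits are reserved too */
            free_count++;
    return free_count;
}
```

At 128 GiB with 512-byte clusters, that loop is 2**28 entries (a 1 GiB FAT), every time the cached numbers are untrustworthy.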

Comment Why github-style "pull request" workflow? (Score 1) 12

I'm not sure why the "pull request" style code review workflow is so popular. This guy's blog entry discusses UI and technical issues with it, although it is very long: https://gregoryszorc.com/blog/... He clearly understands how things currently work internally, and the issues with them much better than any other writeup I've encountered.

Personally I like gerrit's review style much better, although the above blog discusses the possibility of developing something even better.

Comment End-to-end encryption (Score 1) 88

Any particular hop's link layer security shouldn't matter much, as long as all marginally sensitive traffic is encrypted end-to-end with a high quality encryption protocol. And while intercepting and messing with unencrypted traffic on wired connections, at the VPN provider, or close to the ultimate endpoint is somewhat harder than on hotel wifi, it generally isn't all that hard (certainly not from a theoretical point of view).

Other parts of the FBI seem to be actively arguing (or even fighting) against "real" security. For example a couple of years ago: https://it.slashdot.org/story/...

The cynic in me half suspects this new call for using VPNs from hotels is an indirect way of pursuing this same weakened back door security argument: "See, when it is 'reasonable' we support increased security," (while ignoring how weak the additional security of a VPN really is)...

Comment Similarity to synthetic aperture radar (Score 1) 63

It is also somewhat similar to the physical setup for gathering synthetic aperture radar (SAR) data: https://en.wikipedia.org/wiki/... , in the sense that it works with a one-dimensional signal of data.

But differences include:

  • Highly doubtful it measures the timing of visible light to the fraction of a wavelength necessary to recover phase information.
  • It doesn't necessarily correlate multiple pulses from different positions together, and especially doesn't sum them with phase information to cancel out incorrect correlations.

(Disclaimer: I did a bit of work on algorithms for processing SAR data a few years ago, but not a lot of such work.)
