
Comment Re:what is the issue exactly (Score 2) 15

It sounds like it's effectively licensing M365 so that third-party resellers can compete in the cloud service space by running the M365 platform on their own hardware in other regions of the world, competing directly against Microsoft's own infrastructure while still licensing the Microsoft products that most major organizations run.

The article schneidafunk linked above seems to explain it decently: https://www.computerweekly.com...

Comment Re:Let the market decide (Score 2) 428

The problem can be summed up pretty nicely: improper land management, a.k.a. agriculture, is at the heart of what caused the Dust Bowl of the 1930s. No one really thinks about that in these conversations, or about how big a problem improper agriculture causes.

Now what if I told you there is a mythical thing that undoes the damage of agriculture, restores nutrients to the land, reduces greenhouse gases, and ultimately undoes the damage humans cause? Guess what: it involves the very thing our legislative body is considering protecting, regenerative ranching. But it's actually not mythology, go figure. There is plenty of proof that it works, and not only does it heal the land, it also heals the wildlife living around the ranched land. I highly recommend anyone curious about the subject look at the regenerative ranching YouTube channel by Greg Judy: https://www.youtube.com/@gregj....

But this is a nuanced subject, and nuance is where activists tend to get lost. Take cows: people complaining about their negatives are generally focused on one very specific aspect, which is concentrated mass-breeding feedlots and oversized cattle (over a ton). Anyone who's spent time around those operations has an idea of how disgusting they are, and of the quality of the land they tend to sit on. The weight of the animal also matters: the bigger it is, the more it damages the land rather than helping nature repair it.

This is one of those rare cases where our legislative body is actually considering a decision that could be wise, though I don't know for sure because I don't have all the data. Lab-grown meat is a great concept for a long-term space voyage with hydroponics on board, but for the health of the planet we live on it very well may not be. Look at the other damage humans have caused by destroying rain forests to create farms where they should not be. People do it because they want more access to good meat; maybe lab-grown meat can help with that, but only alongside restoring a lot of the ranching land that no longer gets used the way it once did.

The information is out there for those who don't blind themselves to a cause through emotion or agenda instead of looking at peer-backed, non-obfuscated data.

- Written by a random hooman.

Comment Re:Whitewash (Score 4, Insightful) 74

Wrong. Someone does, however, need to explain why systems like this don't have SRP (Software Restriction Policies) or AppLocker policies enabled with a rigid whitelisting rule set.

Servers/drones/etc. like these should NEVER allow any account permission to run non-whitelisted applications. The fact is, barely any code should be allowed to execute, and it's completely inexcusable for them not to be using the whitelisting rules that are part of Windows/Active Directory. In an environment like this, where there are rigid policies for doing practically anything related to production software, preventing rogue code execution should be mind-bogglingly easy for one moderately skilled administrator.
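
For what it's worth, here's a rough sketch of how little it takes with the AppLocker PowerShell cmdlets (Windows 7/2008 R2 and later; the directory path is just a placeholder, and in practice you'd also create the default rules for Windows itself and push the result through a GPO rather than apply it locally):

Import-Module AppLocker
# Inventory the binaries that are actually supposed to run on the box.
$files = Get-AppLockerFileInformation -Directory "C:\ApprovedApps" -Recurse -FileType Exe
# Build allow rules (publisher rules where signed, hash rules otherwise).
$policy = New-AppLockerPolicy -FileInformation $files -RuleType Publisher, Hash -User Everyone -Optimize
# Apply locally; enforcement also needs the Application Identity service running.
Set-AppLockerPolicy -PolicyObject $policy
Start-Service AppIDSvc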

Comment Re:Your definition of movie may vary... (Score 1) 207

They are sold on DVD *today*, but not "yesterday". With the apparent exception of Star Wreck, I believe all of them were on IMDB well before they were anything more than web series. My bad for not explaining the context.

Of all I mentioned, only Doraleous and Associates is too new and has no physical release yet. I don't even know how much fame that show has yet (but yes, it completely deserves it).

I only knew of Star Wreck via torrent; I'd never heard of a physical release of that product back then. When it was released, I certainly had never heard of any of the people involved in it. Even if those folks are known now, I'm not sure how known they were before its release. (I don't care enough to check for this film.)

Comment Re:Your definition of movie may vary... (Score 5, Informative) 207

So IMDB has a clear tradition and quite likely violated it for...

Star Wreck: In the Pirkinning
The Guild
The Legend of Neil
Dr. Horrible's Sing-Along Blog

Frankly, if any web series deserves to violate this rule, it's Doraleous and Associates: an awesome web-based show that easily deserves to be in IMDB, yet currently is not, not unlike those other great shows that also avoided standard publishing paths. I know nothing about The Tunnel, but I think IMDB damn well should have a vetting process for things worth listing, because they appear to already have one in spirit if not in their own law.

Anyway, these did not originate in or target standard distribution channels, yet they got into the IMDB database. Was the only reason those shows got on IMDB that some of the people working on them were well known, and that IMDB actually has a flexible policy of supporting people it likes when clear traditions are broken? I don't think Star Wreck even had known actors, and yet its original distribution channel was, *gasp*, torrent.

So yes, maybe the folks at The Tunnel kind of have a valid complaint, even if their show is as bad as parts of Star Wreck. Hell, it can't possibly be as bad as Neverending Story 3, which is listed on IMDB and most certainly should be forgotten by all who exist.

Comment OLED may be the reason... (Score 1) 952

This has nothing to do with HDTV. Manufacturers saw the introduction of OLED over five years ago and knew right away that it meant the end of life for LCD. They feared OLED because its introduction strongly suggested that all the research that had gone into LCD was a waste of money. They had very little incentive to put real research dollars into LCD from that point on, because they already knew, and had talked publicly, about OLED being its replacement.

Given how much the manufacturing process for OLED has improved in the past year, I'm pretty sure the end of life of LCD as an entire display technology is only a few years away. I'm sure there will be the same painful, drawn-out period where OLED costs more than LCD for no reason other than recapturing research dollars before it goes mainstream and kills LCD completely.

Let's all just hope that OLED becomes affordable in much less time than LCD did.

NASA

Dying Man Shares Unseen Challenger Video 266

longacre writes "An amateur video of the 1986 Space Shuttle Challenger explosion has been made public for the first time. The Florida man who filmed it from his front yard on his new Betamax camcorder turned the tape over to an educational organization a week before he died this past December. The Space Exploration Archive has since published the video into the public domain in time for the 24th anniversary of the catastrophe. Despite being shot from about 70 miles from Cape Canaveral, the shuttle and the explosion can be seen quite clearly. It is unclear why he never shared the footage with NASA or the media. NASA officials say they were not aware of the video, but are interested in examining it now that it has been made available."

Comment Re:from experience (Score 2, Informative) 460

For Mac deployment, I script the disk partitioning with the terminal version of diskutil, making the Windows partition the exact same size on all machines and having diskutil mark it as MS-DOS. I then use Bombich's OS-X compilation of NTFS-Progs v1 to capture and deploy both Windows 7 and Vista images to the Macs while OS-X is in use. Students using the computers at the time don't even realize it's happening. NTFS-Progs v2 requires DarwinPorts; I don't believe anyone has made a truly native build of v2.
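
Roughly, the partition-and-image steps look like this (sizes, device names, and server paths are examples rather than my real ones, and the exact diskutil size syntax varies a bit between OS-X releases):

# Repartition: Windows gets a fixed-size MS-DOS slice first so every machine's
# NTFS image is the same size; OS-X gets whatever is left ("R" = remainder).
diskutil partitionDisk disk0 GPT "MS-DOS" WINDOWS 40G "JHFS+" "Macintosh HD" R
# Capture a prepared Windows partition with ntfsclone (from ntfsprogs)...
ntfsclone --save-image --output /Volumes/Server/win7.img /dev/disk0s2
# ...and push it back out to other machines while OS-X is running.
ntfsclone --restore-image --overwrite /dev/disk0s2 /Volumes/Server/win7.img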

It doesn't have multicast, but you can re-deploy Windows while students are using OS-X during a class. The only way students can screw up a Windows push is by rebooting a machine while I'm doing it, in which case I start over. I can also do it all while the machines are NetBooted, sending the imaging commands over SSH/ARD. I never have to visit them directly.

NTFS-Progs is also open source.

Using my method, though, you do have to use "dd" to capture and deploy the Windows boot sector, located on what is my /dev/disk0, while the computer is either NetBooted or booted from a FireWire drive. I also make my "MS-DOS" partition disk0s2 on a GPT disk while OS-X uses disk0s3. With this approach it's more important that the Windows partition be identical on all machines than the OS-X partition, so it's easier to plan on it being the first available partition. The side effect is that if anyone launches Boot Camp in OS-X as an administrator and tells it to get rid of the Windows partition, it will immediately get rid of the OS-X partition instead, even if you're booted from it. That doesn't affect me, though, since I strip Boot Camp off my OS-X deployment image, and very few people could launch it even if I didn't.
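
The dd part is just a one-sector copy, something like this (placeholder paths), done while the target machine is NetBooted:

# Capture the boot sector once from a working machine...
dd if=/dev/disk0 of=/Volumes/Server/win-bootsector.bin bs=512 count=1
# ...then write it to each target while it's NetBooted or FireWire-booted.
dd if=/Volumes/Server/win-bootsector.bin of=/dev/disk0 bs=512 count=1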

The terminal version of diskutil is, I believe, in 10.4.7 and above, though maybe it appeared in 10.4.8.

Comment OS-X Deployment Without a disk image. (Radmind) (Score 2, Informative) 460

So here you go. Far too much conceptual information about a process I suspect almost no one here knows beyond the few that already mentioned it. Enjoy.

The best I can do is tell you how I do it for about 400 Macs and the tools I use. I basically use two OS-X 10.6 servers that host NetBoot images and Radmind, plus Apple Remote Desktop (ARD) on a client to control events on all the clients, whether they're booted locally or NetBooted.

I'll also be up front: if you are not computer savvy, and don't want to be, do not touch Radmind with the idea of using it to deploy anything beyond software to an already existing deployment; stick with an image-based package. If, however, you are computer savvy, can get around a command line, and need to support an unlimited number of *nix machines, especially in a lab, Radmind is an incredibly strong tool.

I use Radmind alone for both OS deployment and software updates because it's a delta-based package and tripwire system that you don't need to rebuild over time unless an administrator makes horrible mistakes without a backup. If I really needed an image, I would have Radmind generate that build for me and then run 10.5/10.6's NetBoot/NetInstall creation tool on the result.

I do not use NetRestore, NetInstall, or any other deployment tools for OS-X. Constantly rebuilding and maintaining various images over time is a waste compared to a delta-based deployment system, especially when I'm the only one supporting the image. It may take *slightly* longer to deploy than a sector-based image, but the long-term effort required of the administrator drops significantly. Sure, learning Radmind might take a lot of time and effort, but the more random and variously configured the machines you need to support are, the more attractive it becomes to learn it as more than a software-package deployment tool. Heck, the right people could probably support thousands of *nix servers with it without much effort at all.

You can also reverse the use of Radmind over time and maintain just software packages by making a negative transcript targeting just ".". If you do that, and make sure clients don't see the overall OS-level packages, you can update software alone without touching the core OS.
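
In command-file terms that looks something like this: the negative transcript's only entry covers ".", so the rest of the OS is left alone while the listed application transcripts still get enforced (transcript names invented; double-check the precedence ordering against the radmind docs before copying the idea):

n ignore_everything_else.T
p Firefox^3.6.T
p Adobe_CS4^Design_Premium.T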

Radmind comes with a set of tools, and I'm only going to mention the most critical of them. One scans a computer for changes. Two others take that scan and either upload data to the server or use the server's knowledge to 'cause' changes on the client. Another downloads the command lists from the server, and those command lists know about all the "package" transcripts that define almost every file on the computer. Using them all in combination, in scripts written by someone who knows how to manipulate the results, is what makes Radmind powerful.
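
For reference, those tools are ktcheck, fsdiff, lapply, and lcreate, and the basic loop looks roughly like this (the hostname is a placeholder and the flag sets are from memory, so check the man pages):

ktcheck -h radmind.example.edu -c sha1                 # pull the current command file / transcript list
cd / && fsdiff -A -c sha1 -o /tmp/apply.T .            # diff the filesystem against the loadset
lapply -h radmind.example.edu -c sha1 /tmp/apply.T     # make the client match it
# Capturing a change instead: diff in "create" mode and upload the result.
cd / && fsdiff -C -c sha1 -o /tmp/New_App^1.0.T .
lcreate -h radmind.example.edu -c sha1 /tmp/New_App^1.0.T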

Up front there are negatives and positives about Radmind:
Negatives:

It can be very complicated.

A lot of the documentation is poor, though it's better today than when I started using it.

Simple mistakes in a transcript can suddenly prevent the client-side app from functioning. Discovering why can sometimes be very difficult. (especially if it's a nested command file level issue that only gives you "Input/Output error" when lapply crashes.)

It only supports network compression, which frankly is worthless. No file-based compression during capture.

Almost any error in a delta file will break the process of updating/deploying machines. It really requires that someone learn it inside and out.

The default method of deploying images to massive numbers of machines that may need different builds is unwieldy. There are ways around some of this.

Once you have several hundred transcripts, the GUI console in OS-X is annoying to use, and creating and using subfolders for transcripts or command files will seriously screw up your deployment life.

It has no GUI on anything except OS-X, so your master Radmind server is best run on OS-X.

Little about the way I use Radmind is publicly documented; I took what Radmind comes with and scripted a deployment system around it.

Unless you script it, doing single-application deployments is tedious.

A full maintenance run will undo all end-user changes unless you are explicitly ignoring that area of the disk.

You have to script it so the computer renames itself properly after maintenance, best done with a MAC-address client list; otherwise you have to ignore the file that contains the workstation name.

Positives:
It's open source; BSD licensed.

You can manage any variant of Linux/Unix/OS-X with Radmind. There is also a Windows version, but differences in how Windows functions versus *nix make that variant much less valuable.

It's powerful on its own, but in the hands of someone who can script around it, it's possibly the most powerful deployment tool out there.

There is practically no environment Radmind couldn't deploy to, so long as we're talking *nix and the client tools are compiled for it. The primary limitation of Radmind is the knowledge of the administrators using it.

The OS-X server version at least has a GUI, which makes configuring machines much more point-and-click. It's far easier to build .K files with the server GUI than with a text editor.

There is enough public documentation for you to learn the basics of how to use it, but any serious use of it as a deployment system requires scripting.

The OS-X client GUI for capturing images is good for learning how the terminal commands work, but over time you'll completely abandon the tool and just use scripts to build new delta transcripts. Once you start using multiple command files to nest complex builds, the client GUI will completely fail to function.

It's easy to make any client a replica Radmind server and clone all the delta transcripts/files to it. Technically almost any system could be a replica of your Radmind configuration. It helps to separate your 'staging' environment from production: just sync production with staging once you consider the new deltas final.

If you know bash scripting, you can make your delta deployments fully emulate the idea of software groups or deployment groups. You deploy knowledge of every build to your clients and then your scripts filter out what's unneeded (there's a rough sketch of this further down, with the name table).

It supports sha1 hashing for validating files sent to the server or client. You can be sure with this that your delta images on the server are not corrupt. It can also confirm that every file on the client is 100% what it should be, making this tool akin to a system-repairable version of tripwire.

You don't need deployment protection tools such as DeepFreeze. Radmind acts like tripwire in detecting changes, except it can undo them.

A full maintenance run will undo all end-user changes unless you are explicitly ignoring that area of the disk. This makes updated systems as good as if they were newly deployed.

You can update targeted directories vs. the entire OS if you want. Good for updating public user accounts every time they log out.

It has some limited built-in ability to deploy unique builds to systems based on IP/subnet and some other system-side information. I don't use that part of Radmind, however; I push out a base command.K file that contains information about everything, which a script then filters down.

As an example, this is how I use Radmind, with some limited terminology.

Command (.K) files list what to do to a machine. Transcript (.T) files are the actual "packages" being deployed. "Positive" transcripts are changes you make to a machine; "negative" transcripts are areas of the computer to ignore.

Radmind starts out with a base .K file. I create a bunch of new ones; I should mention now that, as I recall, doing this instantly prevents the GUI client-side deployment/capture tool from functioning.
os.K
os^OS-X_10.4_Client^PPC.K
os^OS-X_10.4_Client^Intel.K
os^OS-X_10.5_Client.K
os^OS-X_10.5_Client^Avid_and_ProTools.K
os^OS-X_10.6_Client.K
department.K
department_name1.K
department_name2.K
software_level_1.K
software_level_2.K
room.K
room^room_number.K
computer.K
computer^name1.K
computer^name2.K

In the list above, os.K contains all the .K files that start with os^. The first level of room.K, computer.K, and department.K works the same way. Software level 1 is the software everyone gets; level 2 is video production/3D applications that only certain areas need (those packages eat up far more disk space than Adobe CS4 and web-based apps). The more specific .K files are the ones I actually put software packages in; I *never* put software packages in the first and second levels of nested .K files, though you will probably use it far differently than I do at first.
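
To make the nesting concrete, here's a sketch of what two of those files might contain (the p/n/k line types are the standard radmind command-file syntax as I remember it; the transcript names are invented):

# os^OS-X_10.6_Client.K
n negative^OS-X_10.6.T
p OS-X_10.6.0^Base.T
p OS-X^10.6.4^Update.T
p OS-X^10.6.4^Update^Offline.T

# os.K just includes every os^ command file
k os^OS-X_10.4_Client^PPC.K
k os^OS-X_10.5_Client.K
k os^OS-X_10.6_Client.K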

I also have a name-table list that contains the MAC addresses (both en0 and en1, typically wired/wireless) of every computer I control in my organization. In that file I list the department, room number, name, etc. of every computer, and my scripts use this information during deployment to *remove* the excess entries from the command files.

Why I do it all this way is difficult to explain in one go, but basically it lets me do far more flexible deployments where the default Radmind deployment methods are extremely rigid. This is more an example of how it can be used than how anyone would use it on a first attempt, primarily because the GUI tools for OS-X will really help you understand how the command line works first.
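
A toy version of the name-table idea, just to show the shape of it (the file format, paths, and the 'k'-line filtering are all simplified stand-ins for my real scripts):

#!/bin/bash
# nametable.txt lines look like:  00:1e:c2:aa:bb:cc  lab1-01  art  216
mac=$(ifconfig en0 | awk '/ether/ {print $2}')
line=$(grep -i "^$mac" /var/radmind/nametable.txt)
host=$(echo "$line" | awk '{print $2}')
dept=$(echo "$line" | awk '{print $3}')
room=$(echo "$line" | awk '{print $4}')
# Keep everything except department/room/computer includes that aren't ours.
awk -v d="$dept" -v r="$room" -v h="$host" \
    '!/^k (department|room|computer)/ || $2 ~ d || $2 ~ r || $2 ~ h' \
    /var/radmind/client/command.K > /tmp/filtered-command.K
# Rename the machine to match the table after maintenance.
scutil --set ComputerName "$host"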

Let's see... I also recommend using "./" in all your paths in Radmind, and making that your default when you first use the GUI client. Using Radmind as your only deployment system is impossible without relative paths that can target the root of a drive you're not booted from. You can always change it down the road, but I regretted my original choice once I realized Radmind could be used for deployment instead of NetInstall.

Plus, when you capture an OS for the first time, you normally use negative transcripts to prevent capture of data that is in use. You must then do a second pass with a negative transcript that ignores only an item of no deployment value (like ./private/tmp). I name this second pass exactly like the first, except with "^Offline.T" appended. I always capture system-level transcripts in these two delta stages, for example "OS-X^10.5.8^Update.T" and "OS-X^10.5.8^Update^Offline.T". The "^Offline.T" file contains all the data not discovered on the system during the first pass. My scripts know when a machine is NetBooted versus booted off the local disk, and "^Offline.T" files, which contain items that are normally in use (such as the user account database), are only applied when NetBooted.
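
A rough sketch of the two passes (same placeholder hostname as above; the command files named with -K are assumed to carry the different negative sets, and the second pass assumes ktcheck has already pulled a command file that includes pass one):

# Pass 1, booted normally: the full negative set keeps in-use files out.
cd / && fsdiff -C -K /var/radmind/client/command.K -c sha1 -o "/tmp/OS-X^10.5.8^Update.T" .
lcreate -h radmind.example.edu -c sha1 "/tmp/OS-X^10.5.8^Update.T"
# Pass 2, NetBooted: only ./private/tmp stays negative, so everything that was
# in use during pass 1 lands in the ^Offline.T transcript.
cd "/Volumes/Macintosh HD" && fsdiff -C -K /var/radmind/client/offline.K -c sha1 -o "/tmp/OS-X^10.5.8^Update^Offline.T" .
lcreate -h radmind.example.edu -c sha1 "/tmp/OS-X^10.5.8^Update^Offline.T"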

Also, for OS-X clients, avoid case-sensitivity support in both your file system and in Radmind unless you really require it. I recommend going into your Radmind server settings and making it case-insensitive by default. Vendors are very inconsistent about naming directories with consistent case across product versions, and case sensitivity may eventually render applications (or the entirety of your Radmind set) useless and force you to rebuild from scratch. It gave me nothing but headaches. Anyway, that was my experience during the 10.2 days; I stopped doing it by the time 10.3 came out.

Phew. That's all I care to say right now. Hope it helps.

Comment Windows Firewall and IPsec (Score 5, Informative) 673

I can't speak for the Linux side of things, but here are my comments for Windows.

Note that while this is easier to manage with Group Policy via Active Directory, you can use local Group Policy settings and migrate them across your lab. My comments here apply to XP and 2003.

The built-in firewall is your first defense, blocking all non-permitted inbound traffic before it reaches your machines. Tell the firewall which applications you will be using, and it will dynamically open the required ports as each program needs them; this way you don't have to manage ports by hand. You want this layer to stop unwanted traffic before it reaches IPsec, and for whatever logging you may need. The current version of IPsec doesn't really do packet logging and is in no way a firewall (although I used it for years as a firewall on Windows 2000 and never had any problems, those were not critical systems either).
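
On XP SP2 / 2003 you can script the per-application exceptions instead of clicking through the GUI; something along these lines (the program path and name are placeholders):

netsh firewall set opmode mode=ENABLE exceptions=ENABLE
netsh firewall add allowedprogram program="C:\Program Files\LabApp\labapp.exe" name="LabApp" mode=ENABLE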

Use IPsec in pure authentication mode without encryption (unless you have encryption offload cards). You can use it in several ways.

All communication requires authentication:
No computer that is not set up properly can talk to yours. Period.

All inbound communication requires authentication:
All inbound traffic must authenticate or be dropped.

If you lock inbound, but not outbound, your clients can still access web resources and any other computer without issue, but you have completely prevented anyone else from initiating communication with your systems.

IPsec works like this: generic rules (require authentication from everyone) are overridden by more explicit rules (do not require authentication from whatever.system.local). Generic all-IP rules are overridden by port rules, port rules are overridden by explicit IP address or subnet rules, and so on.

For your purpose, I would at least require all inbound traffic to authenticate with a preshared key (a string); however, this is not secure, and anyone with administrator access can rip the key out of the registry. To do it securely you need certificates or Kerberos. The Kerberos implementation requires Active Directory; the certificate method requires a full IKE/PKI configured for your environment. You do not need to buy a certificate from somewhere like VeriSign; you can do it all yourself with self-signed certificates. The whole IPsec process can be automated through Active Directory, but if you don't have Active Directory, I believe any generic IKE/PKI server can generate valid certificates for your use. It's a lot less work on your part to do it through Active Directory.
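
If you end up scripting it rather than using the MMC snap-in, the 2003-era "netsh ipsec static" context can build the whole "authenticate everything addressed to me" policy. A sketch from memory follows (names are placeholders, and you would swap kerberos=yes for rootca=... or psk=... depending on the authentication method; verify the parameters against the netsh help before trusting this):

netsh ipsec static add policy name="Require Auth"
netsh ipsec static add filterlist name="All Traffic To Me"
netsh ipsec static add filter filterlist="All Traffic To Me" srcaddr=any dstaddr=me protocol=any
netsh ipsec static add filteraction name="Negotiate Only" action=negotiate inpass=no soft=no
netsh ipsec static add rule name="Auth Rule" policy="Require Auth" filterlist="All Traffic To Me" filteraction="Negotiate Only" kerberos=yes
netsh ipsec static set policy name="Require Auth" assign=y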

IPsec policies will work between Windows 2000, XP, and 2003; however, your key strength is limited by the oldest OS you use. 2000 only works with weak keys, XP with weak and medium, and 2003 with strong keys plus the two weaker levels. You can also order the policy from strongest key generation to weakest, so 2003 always talks to 2003 with strong keys, 2003 to XP with medium, and 2003 to 2000 with weak. It may be possible to make IPsec interoperate with Linux using FreeS/WAN, or whatever project replaced it, but I never used that software.

One last thing: if your systems are used by untrusted users, consider using the software restrictions built into Windows. Once they are activated and configured well, it becomes very difficult for a local user to run unauthorized software without sitting at the machine and taking it over first. Look up Software Restriction Policies for this.

K.
