
Comment Do The Developers Understand The Platform? (Score 1) 402


To me this is the crux of the question: I have seen developers who were perfectly capable of managing individual servers, thanks to long experience and having had to perform DBA duties at the scale of the systems they dealt with. I have also seen developers who know a language but blame the SQL server when they write queries that run like batch processes, through a lack of understanding of how the systems they are writing generic SQL against actually work.

The first type I would probably allow access to the production system - provided it was not wildly outside the developer's experience and did not have an uptime requirement that meant it had to be strictly controlled and tested. The second type is exactly why I would never allow a lot of developers near production systems - small scale or not.
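
To illustrate what I mean by a query that "runs like a batch process", here is a deliberately contrived sketch (hypothetical table and data, nothing from a real system) showing the row-by-row habit next to the set-based statement the database would rather be given:

    import sqlite3

    # Hypothetical schema and data, just to make the example self-contained.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, total REAL)")
    conn.executemany("INSERT INTO orders (status, total) VALUES (?, ?)",
                     [("open", 10.0), ("open", 25.0), ("closed", 5.0)])

    # "Batch process" style: drag every row to the client and filter/sum in a loop.
    grand_total = 0.0
    for status, total in conn.execute("SELECT status, total FROM orders"):
        if status == "open":
            grand_total += total

    # Set-based style: one statement, the filtering and summing stay in the engine.
    (grand_total_sql,) = conn.execute(
        "SELECT SUM(total) FROM orders WHERE status = 'open'").fetchone()

    assert grand_total == grand_total_sql

The second form lets the optimiser, the indexes and the engine do the work in one round trip; the first fetches every row back to the application and only starts to hurt once the table is no longer tiny - which is usually when the complaints about the SQL server start.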

Comment Re:For me (Score 1) 402

I can see both sides of this; however, the real answer is that it depends on the environment required for testing and for development:

1) The easy scenario:

We clone the live servers from snapshots or disk files and make them available on a test system that mirrors live but is only accessible via a private network - no IP changes, and no config changes caused by hostnames embedded (rightly or wrongly) in software or databases. The downside is accessibility - you can't just bring it up on your main LAN, as it will clash with the live system.

2) The slightly more difficult and more time-consuming scenario:

We clone the live servers as a base config and spend some time reconfiguring them for the test environment so that they don't clash with the live system. This breaks things, which we analyse and fix, tracking the changes so that we know what we are doing next time and can perhaps script some of them (see the sketch after this list). Once we have it working, we either need a data transfer method to keep the data up to date or we have to redo the work from scratch. Data transfer methods depend upon the table structures and are therefore likely to be version specific, or to require the data migrations to be updated.

3) From there you get into the very time-consuming scenario:

Neither of the first two is particularly difficult if you are developing the app and therefore know which pieces are server specific. However, there are likely to be large environmental differences between your live and test systems - not least the traffic patterns and system utilisation. Test systems also tend to be cheap hardware or virtualised servers set up as quickly as possible, whilst the live environment could involve clusters of servers and multi-site failover, load-balancing and caching engines, all of which can affect the behaviour of the app. The closer you want the test environment to be to live, the more you have to mimic.
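
For scenario 2, the pieces worth scripting are mostly bulk rewrites of the hostnames and addresses embedded in config on the clone. A minimal sketch of that idea, with entirely hypothetical hostnames, addresses and file paths standing in for whatever the change tracking has recorded:

    import pathlib

    # Hypothetical live -> test substitutions; a real run would load these
    # from the change list tracked while fixing the clone the first time.
    SUBSTITUTIONS = {
        "db01.live.example.com": "db01.test.example.com",
        "10.0.1.15": "10.99.1.15",
    }

    # Hypothetical config files known to embed hostnames or addresses.
    CONFIG_FILES = [pathlib.Path("/etc/myapp/app.conf"), pathlib.Path("/etc/hosts")]

    def retarget(path: pathlib.Path) -> None:
        """Rewrite live hostnames/IPs to their test equivalents, keeping a backup."""
        text = path.read_text()
        updated = text
        for live, test in SUBSTITUTIONS.items():
            updated = updated.replace(live, test)
        if updated != text:
            path.with_suffix(path.suffix + ".bak").write_text(text)  # keep the original
            path.write_text(updated)

    for config in CONFIG_FILES:
        if config.exists():
            retarget(config)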

Comment Re:As a non-developer, this is what I see (Score 1) 216

That doesn't sound ideal, but without knowing the network and its use it could still be a design issue. There are ways to get it to work, depending on what the other switches are and what the traffic requirements are for the network services.

The obvious solution is to distribute the network to keep the load as local to the edge as possible, preventing all the throughput from hitting the core. Distribute the connections between switches too - more of a mesh design than a star - and use redundant links with spanning tree (preferably something better than old STP, i.e. RSTP or newer) to detect loops and block looped connections; these will save your bacon when switches do fail. On many networks you have to do this anyway, as there is NEVER enough bandwidth for everything that might be wanted unless you have over-specified and potentially wasted money anyway.

A network that is over-specified for convenience or for future-proofing has to be justified.
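
To put a rough number on the "never enough bandwidth" point, the back-of-envelope check is simply edge capacity over uplink capacity. A throwaway sketch with made-up port counts and speeds (not a recommendation for any particular design):

    # Rough oversubscription estimate for an access switch uplink.
    # All figures below are hypothetical examples, not a real design.

    def oversubscription_ratio(edge_ports: int, edge_speed_gbps: float,
                               uplinks: int, uplink_speed_gbps: float) -> float:
        """Worst-case ratio of edge capacity to uplink capacity."""
        return (edge_ports * edge_speed_gbps) / (uplinks * uplink_speed_gbps)

    # 48 x 1 Gb/s edge ports fed by 2 x 10 Gb/s uplinks -> 2.4:1 oversubscribed.
    ratio = oversubscription_ratio(edge_ports=48, edge_speed_gbps=1.0,
                                   uplinks=2, uplink_speed_gbps=10.0)
    print(f"Oversubscription ratio: {ratio:.1f}:1")

Anything much above 1:1 only works because not every port talks at once, which is exactly why keeping traffic local to the edge matters.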

Comment Re:IBM (Score 1) 124

They probably did. Certainly the "closed source" team seems to have won the argument inside Novell, as OES did have CIFS support based on eDirectory. It also had a poor eDirectory/SAMBA integration - and not just that, half of the other functionality was a pain. The whole thing was poor and not ready - it's probably OK now, but it's at least three years too late.

Comment Re:Going Nowhere (Score 1) 124

As a long-time Novell user I have, in general, to agree. I would say that the "legacy product lines" aren't something that should all be dumped into a bin; there are still some good things there. Specifically, their ZCM suite (now completely rewritten so that no eDirectory is required, but now facing severe fire from Microsoft's own management suites) and their IDM products (excepting their use of eDirectory) aren't too bad. Equally, there are aspects that should be dumped because they are obsolete - relics of Netware being a market leader 15 years ago, which is when Novell should have woken up and smelled the coffee.

The shame is that SUSE is still one of the best distros and deserves recognition for it. Many capabilities released into the community would not be where they are now if it hadn't been for the work put into them, both pre- and post-Novell.

Comment Re:IBM (Score 1) 124

I fail to see how the SAMBA issue impacted Novell. Novell already had a CIFS implementation that was as capable as, if not more capable than, SAMBA (certainly in terms of concurrent connections, compatibility and throughput, though the management software was questionable...). It was a closed-source product and part of an existing revenue stream. They effectively ditched SAMBA development to push their OES product (basically Netware migrated to Linux) with integrated CIFS. Unfortunately they hadn't got this product stable, so many people who had used Netware for years finally left in droves for either all-Microsoft systems or other Linux products (mostly the former, due to skill sets).

Comment Re:Authentication (Score 1) 99


Yes it would, and unfortunately it's straightforward. However, there are lots of levels of mitigation in the infrastructure if network admins understand that they are there and use them - whether this is the case for the cloud is another matter.

What we are saying in a number of these posts is quite simply - "I don't trust core services on the network I'm connecting to". The response is, "Don't connect to that network or allow that network to connect to you".

Firstly - do you trust your router or DHCP server to supply DHCP without contamination? This gives the paths to the image source and a number of other details (including, in some cases, a level of authentication to those sources).
Secondly - do you trust that you can reach your sources for the disk images without contamination? Is your TFTP, HTTP or iSCSI source really your source, and is the image on it the one you put there, or has it been changed, modified, had a rootkit installed, etc.?
Thirdly - when you have the image installed, do you trust that you can access systems and services without contamination (malware, viruses, etc.)?

If you don't trust any of the above, then you either need mitigation or you need to not connect to the network. I can see how this technology is a great improvement on PXE within a private LAN or secured network - hence my comments - however, whether it is "cloud-ready" is another matter.
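
One concrete mitigation for the second question - confirming that the image you fetched is still the one you put there - is to check it against a digest recorded out of band when the image was published. A minimal sketch; the path and expected digest below are placeholders, not real values:

    import hashlib

    # Hypothetical path and digest - record the expected digest out of band
    # when you publish the image, not on the boot server itself.
    IMAGE_PATH = "/var/tmp/netboot/initrd.img"
    EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
        """Stream the file so large boot images don't need to fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if sha256_of(IMAGE_PATH) != EXPECTED_SHA256:
        raise SystemExit("Boot image does not match the recorded digest - do not use it.")
    print("Boot image matches the recorded digest.")

It does nothing, of course, if the attacker can also change the place you keep the digest, which is why the comments below about signatures and pinned keys are the more complete answer.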

Comment Re:Authentication (Score 1) 99

It's still an omission. If you use PXE for remote administration (instead of using it for completely diskless operation), then there is local data which can be compromised by a hostile PXE payload. How hard would it have been to verify a cryptographic signature against a public key stored in the BIOS configuration?

Yes, it was missed in PXE; however, given the context of when PXE was developed, I doubt anyone thought it was needed. The point of gPXE is that PXE hasn't been developed in line with changes in the computing arena, and these omissions need addressing.

I've checked further: even though there is a level of authentication built into the command line, gPXE isn't yet developed enough to support 802.1X. However, WEP, WPA and WPA2 are now supported, so remote boot over wireless can be undertaken securely. The things you mention aren't really the point of protocols like WPA2 (and the encryption associated with them in the wireless realm) or 802.1X (in both the wired and wireless realms), which authenticate devices to the network. They won't stop a faked PXE image served via a poisoned ARP entry or a man-in-the-middle attack on the HTTP or TFTP server, but they are about securing your network so that an attacker's device finds it a lot more difficult to get on and do the things you mention. The fact that they are developing gPXE into something far more capable, with support for HTTP, iSCSI, encrypted wireless and more, deserves a good round of support for trying to move things forward, though it's probably not where it should be yet...
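
On the parent's suggestion of verifying the image's signature against a pinned public key: the check itself is small. Here is a rough sketch using the third-party Python `cryptography` package, with hypothetical file names for the image, a detached RSA signature and the publisher's key - where the firmware would actually store that key and run the check is, of course, the hard part:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    # Hypothetical file names for illustration only.
    IMAGE_PATH = "netboot.img"
    SIGNATURE_PATH = "netboot.img.sig"   # RSA signature produced by the image publisher
    PUBLIC_KEY_PATH = "vendor_pub.pem"   # the key that would be pinned in firmware/BIOS

    with open(PUBLIC_KEY_PATH, "rb") as fh:
        public_key = serialization.load_pem_public_key(fh.read())
    with open(IMAGE_PATH, "rb") as fh:
        image = fh.read()
    with open(SIGNATURE_PATH, "rb") as fh:
        signature = fh.read()

    try:
        # Raises InvalidSignature if the image or the signature has been tampered with.
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        print("Signature OK - this is the image the publisher signed.")
    except InvalidSignature:
        raise SystemExit("Signature check failed - refuse to boot this image.")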

Comment Re:Authentication (Score 5, Informative) 99

It wasn't designed for it - PXE boots without authentication on the client so that thin hardware can fetch the image, and authentication then takes place once the OS is installed. It assumes that control of the local LAN is in place and that the LAN is trusted. If you are looking for authentication at this level, you'd need to look at authentication to the switch or to the wireless network - pre-authentication using something like 802.1X. I'm not 100% clear, but I believe gPXE has something in the docs that probably covers that, as it has scripting capability before DHCP addresses are received (at least for wireless authentication, and possibly 802.1X)...

Comment Re:Filters? What filters? (Score 1) 207

You think that you can't detect a proxy on a dynamic IP? Think again...

As soon as you see URL requests going to IP addresses that are not known to belong to that URL's host, you can pretty much mark it as a proxy - and if you think a request for a URL through a proxy completely masks the URL requested, then you are living in a dream world.
There are also other tells in proxy software - how the URL is constructed, or how the site redirects the data.

You could use SSL to the proxy as well; however, some firewalls act as a man-in-the-middle to decode your stream and can still see the proxy.

Yes, it makes it more difficult to filter traffic, but you are seriously underestimating the capability of modern firewalls. Modern firewalls inspect your HTTP traffic on all ports and at the higher layers of the stack. This allows good sysadmins to build traffic signatures that are likely to detect your proxy because of its nature.
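
As a toy version of the first tell above - a request whose destination address doesn't belong to the host it claims to be for - something like the following. The hostname and destination IP are made-up values, and a real device would work off flow logs and combine this with other signatures, since CDNs and round-robin DNS make single lookups noisy:

    import socket

    def looks_like_proxy(host_header: str, destination_ip: str) -> bool:
        """Flag a flow whose destination IP is not among the IPs the Host resolves to."""
        try:
            resolved = {info[4][0] for info in socket.getaddrinfo(host_header, None)}
        except socket.gaierror:
            return True  # can't even resolve the claimed host: suspicious
        return destination_ip not in resolved

    # Hypothetical flow: a request claiming "Host: www.example.com" sent to 203.0.113.7
    print(looks_like_proxy("www.example.com", "203.0.113.7"))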

Comment The lawsuit is in the post (Score 1) 207

This is missing the point.

Schools apply filtering not only to protect children but also to protect themselves as organisations. If you have no filtering, then all it takes is one overzealous parent and a lawyer or a journalist. The school's reputation is sullied, affecting student intake, funding and, in all likelihood, the long-term viability of the school - even before it is put into special measures and has control taken away from the governing body running it.

Schools pay thousands of pounds to filter traffic because the consequences of not doing it are worse for the school, and therefore for the students they do manage to educate.

Comment Making money from Novell? (Score 1) 144

Unfortunately I suspect the only way to achieve this is to change the name and dump half the company in favour of marketing.

They have some of the best software in their fields in SUSE, IDM, Zenworks and Platespin, and a good vision, but not the resources to support it properly. They market themselves worse than anyone else in the market and always have. Worse, the shame of it is that every time you mention Novell, people think of a company from the 1980s that had Netware - which, whilst great in its heyday, became a turd when they tried to pretend it could do something other than file serving. The bugs have come thick and fast ever since.

Open Enterprise Server is another turd - take a good Linux and make it run like Netware. Forget it. eDirectory ran rings around AD but wasn't compatible enough with Windows services (I wonder why MS prevented that?)... These technologies are dead, and I think even Novell recognises it (Zenworks now uses Oracle or Microsoft SQL Server instead of eDirectory, and so on).

As a long-time user of Novell products I have to say: their Linux is great; their desktop management and virtualisation management are really very good (managing Hyper-V, VMware and Xen heterogeneously - including, with Platespin, all the conversion capabilities); and their identity management talks to a ton of things and actually works, unlike everyone else's that I've used. But their old core (Netware, NCP) and its associated technologies need dropping like the steaming turds they are. Then they need to be able to sell in competition with their rivals, which they've never managed.

I have to say that this might be the best thing that has ever happened to them if it acts to focus them...

Comment Contract Contract Contract (Score 1) 730

That's why you specify clauses for lost data and breach of confidentiality, ensure non-disclosure agreements with any staff who have access, and so on, in the contract you take out. That way your support company has to ensure that breaches of your security are minimised (nothing is ever 100% secure, and this has to pass the test of reason...). To be honest, if you ask for a contract covering those liabilities and cannot get one, you need to pull the plug. They will either be able to handle these eventualities or they aren't worth trusting.

Comment A better Torpig (Score 2, Interesting) 294


random speculation

So if you take the paradigms of open source and apply the benefits of free and open criticism of a project, then the ultimate result of this paper should be a better Torpig. As such, I wonder how long it will be before some of the weaknesses mentioned in the paper that made Torpig vulnerable to takeover quietly disappear...

Torpig will doubtless allow updates to itself - allowing the current C&C commands to take varied actions, for example. Updating the infected machines with code whose domain flux is harder to predict, and hence preventing the injection of rogue C&C servers, may well be achievable. After the publication of a paper like this, I'd be unsurprised if the code weren't already being updated and some of the methods in the paper weren't already out of date.
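
For anyone unfamiliar with the term, "domain flux" just means the bots derive a rolling list of candidate C&C domains from a shared, usually date-based seed; anyone who can reproduce the seed can predict and pre-register the same domains, which is essentially what the takeover relied on. A deliberately simplified illustration of the concept (not Torpig's actual algorithm):

    import hashlib
    from datetime import date, timedelta

    def candidate_domains(day: date, count: int = 3, tld: str = ".com") -> list:
        """Derive a small, date-dependent list of rendezvous domains.

        Deliberately simplified: anything that can reproduce the seed can
        predict (and register ahead of time) the same domains, which is
        exactly the weakness the researchers exploited.
        """
        domains = []
        for i in range(count):
            seed = f"{day.isoformat()}-{i}".encode()
            domains.append(hashlib.sha1(seed).hexdigest()[:12] + tld)
        return domains

    # Domains a bot (or a researcher) would try over three consecutive days.
    for offset in range(3):
        print(candidate_domains(date(2009, 5, 4) + timedelta(days=offset)))

Swap the predictable date seed for something defenders cannot know in advance and the pre-registration trick stops working - which is the sort of quiet change I'd expect.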

Then again, I do wonder whether publishing this now is because the botnet has already moved on and the techniques are therefore no longer available. Publishing would otherwise be a little irresponsible if the agencies involved in the article were still using the techniques mentioned.

Then again, there are multiple other reasons for publishing this.

/random speculation
