User Journal

Journal: Walking robot controlled by live monkey's brain

Journal by greginnj
The New York Times reports that neuroscientists have created a connection between a live monkey's brain and a walking robot that allowed the monkey to control the steps of the robot as it walked on a treadmill. The monkey was in North Carolina; the robot was walking in Japan. At first the monkey walked in sync with the robot (which it could see on a large screen). Then, when the monkey's treadmill stopped, it continued to make the robot walk. Cue the whatcouldpossiblygowrong tags now!

Journal: Virtual Desktop management for schools

Journal by greginnj
http://linux.slashdot.org/comments.pl?sid=238027&cid=19460427

(Score:5, Interesting)
by Prospero2007 (1113755) on Sunday June 10, @05:41PM (#19460427)

Yes, yes. Connor is my right-hand man, so to speak. He has access to root, but that is because I can trust him. Our school webpage is going to describe the configuration we have in more detail, but in short, here is how I have it set up. (Not all students have that kind of access.)

Linux
Students log in with username 'student', password 'student'. Kiosktool plus chmod -R a-w on /home/student/Desktop seems to effectively lock down the desktop. Students can't change anything, and what they see is what they get. Kiosktool is excellent, but it isn't perfect: you have to set certain file permissions manually for it to be effective. (Operations like eject can't be performed by your average Johnny.)
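A minimal shell sketch of that lockdown, with a scratch directory standing in for /home/student/Desktop (the path and filename here are illustrative):

```shell
#!/bin/sh
# Sketch of the Kiosktool + chmod lockdown described above.
# A mktemp directory stands in for the real /home/student/Desktop.
desktop=$(mktemp -d)
touch "$desktop/shortcut.desktop"
chmod -R a-w "$desktop"        # strip write permission for everyone
ls -l "$desktop"               # no 'w' bits remain on the entries
```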

Windows

We have DeepFreeze plus a limited user account. Windows Explorer is effectively disabled, and the Public Fox extension is in full effect on Firefox, keeping the students from downloading pesky .exe files and changing the browser settings. (Public Fox is also being used on the Linux side. It's great!)

Shared resources

I have a Samba server set up.

Internal BIND9

Just as an aside, we have named every computer in the wing and have set up an internal *.imak domain. Every computer has its name prominently displayed. For example, sambaserver.imak is where our public Samba shares are located. zeus.imak is my teacher computer, etc.
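For illustration, a sketch of what the internal zone data behind those names might look like (the addresses, serial, and the ns/hostmaster names are invented; only the *.imak naming comes from the setup above):

```
; hypothetical BIND9 zone file for the internal imak domain
$TTL 86400
@            IN SOA  ns.imak. hostmaster.imak. (
                     2007061001 ; serial
                     3600       ; refresh
                     900        ; retry
                     604800     ; expire
                     86400 )    ; minimum
             IN NS   ns.imak.
ns           IN A    192.168.0.2
sambaserver  IN A    192.168.0.10
zeus         IN A    192.168.0.11
```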

All computers have the following on both OSes so that the students can work collaboratively and the teacher can maintain control:
-Apache web servers with PHP interpreters
-FTP server
-SSH server
-VNC server (TightVNC on Windows, KRFB on Linux)
(The VNC is cool because it allows the teacher to remotely commandeer student machines. The name resolution makes it easier, but I also have interactive bird's-eye-view seating charts at each teacher desk. Point and click to take over the student machine. It's neat.)

Anyway, I don't mean to sound like a wise guy, but I thought a little more elaboration was necessary. Any comments that will help to enhance my security are appreciated!

Josh Beck

Journal: E-academic books

Journal by greginnj
E-academic books
(Score:5, Interesting)
by apathy maybe (922212) on Sunday May 14, @11:33PM (#15332291)
(http://www.revolutionaryleft.com/ | Last Journal: Monday October 17, @08:30PM)

What I would like to see is academic books in an electronic format (on a disc distributed with the hard copy, perhaps) so that I could search the text for a phrase or quote that I did not get the page number for. This would make referencing much easier. Of course, having a lot of newspapers and books online from other countries also aids academic research.

-- DISCLAIMER: Use of this advanced computing technology does not imply an endorsement of Western industrial civilization.



Some such texts already exist
(Score:5, Informative)
by golodh (893453) on Sunday May 14, @11:43PM (#15332336)
See e.g.:

-MIT's Open Courseware at: http://ocw.mit.edu/index.html [mit.edu]
-Textbook Revolution at: http://textbookrevolution.org/ [textbookrevolution.org]
-Physics texts at: http://www.phys.uu.nl/~thooft/theorist.html#languages [phys.uu.nl]
-The Assayer at: http://www.theassayer.org/ [theassayer.org]
-Open Content at: http://www.hewlett.org/Programs/Education/Technology/OpenContent/opencontent.htm [hewlett.org]

I also know a number of econometrics and statistics texts that are available as free e-books, but they are of interest only to specialists.


Journal: Email Filtering

Journal by greginnj
How I do it
(Score:2)
by SCHecklerX (229973) on Wednesday December 21, @11:30AM (#14309742)
(http://freefall.homeip.net/)
MIMEDefang + SpamAssassin + Sendmail configs. Why use the CPU cycles to analyze email for spam when you can outright discard most of the stuff right away? (Milters kick ass in this respect: no need to receive the whole message. First sign of trouble, and BLAM! It's rejected. :)

In Sendmail:

1. Enable greet pause
FEATURE(`greet_pause', `1500')dnl
2. Enable bad receipt throttling
define(`confBAD_RCPT_THROTTLE',`3')dnl
3. Obviously, enable privacy flags
define(`confPRIVACY_FLAGS', `goaway,restrictmailq,restrictqrun')dnl
dnl define(`confTO_QUEUEWARN', `4h')
4. And of course, set up your mimedefang filter
INPUT_MAIL_FILTER(`mimedefang', `S=unix:/var/spool/MIMEDefang/mimedefang.sock, F=T, T=S:360s;R:360s;E:15m')

In your mimedefang script in the filter_sender subroutine:

1. Reject anything in the Spamhaus SBL-XBL list.
2. Reject mail from any server whose HELO is not an FQDN (just looking for a '.' in the name is all I do; spam software is stupid and HELOs with single words).
3. Reject anything that HELOs with your own mail server or domain name.
4. Reject anything that HELOs with RFC 1918 addresses.
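The real filter_sender hooks are Perl inside mimedefang-filter, but the HELO checks in steps 2-4 reduce to simple string tests. A shell rendering of that logic, with "mail.example.org" as a stand-in for your own server name:

```shell
#!/bin/sh
# Shell rendering of the filter_sender HELO checks above (the real
# hooks are Perl). "mail.example.org" is a placeholder hostname.
helo_ok() {
    helo=$1
    me="mail.example.org"
    case "$helo" in
        *.*) ;;                        # rule 2: must look like an FQDN
        *)   return 1 ;;
    esac
    if [ "$helo" = "$me" ]; then       # rule 3: claims to be us
        return 1
    fi
    case "$helo" in                    # rule 4: RFC 1918 address literals
        10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[01].*) return 1 ;;
    esac
    return 0
}
helo_ok "relay.isp.example" && echo accept   # prints "accept"
helo_ok "spamhost" || echo reject            # no dot: prints "reject"
```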

In spamassassin:

1. Most defaults are good. Make sure you enable the blackhole checks.
2. Configure an account on your server called 'spam'. Use procmail to write a recipe that will send anything you forward to that account to your Bayes spam database.
3. Make sure that only you can send to the 'spam' account, using filter_recipient in mimedefang.
4. Do the same with a 'notspam' local account to fix anything that gets mistakenly flagged. You should use sane settings for discarding vs. putting messages in a folder for manual analysis.
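A hedged sketch of step 2's recipe for the 'spam' account (the sa-learn invocation is the stock SpamAssassin one; verify the path and flags against your install):

```
# Hypothetical ~spam/.procmailrc: every message delivered to this
# account is piped to the Bayes database; a successful pipe counts
# as delivery, so nothing is kept.
:0
| sa-learn --spam
```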

Optionally, add milter_greylist to the mix. Greylisting REALLY cuts down on the traffic sent to your servers and hits spammers where it hurts...requiring them to use THEIR resources to queue temp-failed messages.

Some stats from my current mail log (home server, not huge volume, but I use the same methods at work with great success). The current log covers Dec 18-21.

$ grep -i spamhaus /var/log/maillog | wc -l
354 (rejected for being on the sbl/xbl list)
$ grep -i misconfigured /var/log/maillog | wc -l
32 (rejected for having bad helo)
$ grep 'You are NOT' /var/log/maillog | wc -l
79 (rejected for pretending to be my server in the helo)
$ grep -i 'send to this address' /var/log/maillog | wc -l
8 (people not on one of my mail lists trying to send to the list)
$ grep -i spamdiscard /var/log/maillog | wc -l
115 (stuff analyzed and found likely enough to be spam to be dropped)
$ grep spam /var/log/maillog | wc -l
145 (stuff that was flagged as spam, but not automatically discarded)

So, you can see that of 733 spammy messages in ~3 days, only 260 actually had to be analyzed by SpamAssassin. In the case of rejections, the sender is notified, so if they are *not* a spammer, they can contact you to resolve their misconfiguration. I reject on the Spamhaus lists and no others, because it is very easy to remove yourself from those lists if you find yourself on them for some reason. The other lists are used in scoring, however, when SpamAssassin does its thing.
--
Replace slshdt with slashdot to email me

Journal: Free SW Alternatives

Journal by greginnj
Re:hmm
(Score:5, Insightful)
by zerocool^ (112121) on Saturday December 10, @07:25PM (#14230765)
(http://www.fredrock.org/ | Last Journal: Tuesday April 13, @11:24AM)

It's getting to the point now in my life where I'm financially stable and can afford to buy the odd game. But even so, I usually try before I buy, and that means piracy. For example: I just played through F.E.A.R. It took me about 8 hours to beat it. And upon starting it up again, I realized that it has no replay value whatsoever. $55 for 8 hours? Thanks, but no; I'm glad I didn't buy it. It's uninstalled. On the other hand, Age of Empires III I downloaded, played, and liked; I'm going to go buy it.

I originally pirated my copy of Neverwinter Nights, but because I enjoyed it so much, I ended up buying the retail version, both expansion packs, and paying for all the downloadable premium modules. And I'm talking as they became available, not years later in the ultra-mega-pack for $40. I probably have close to $180 invested in Neverwinter Nights.

Every time I feel guilty about this policy, I end up buying a game and being pissed off about it. The latest example was Doom III: I bought it and played it, and it too has lackluster multiplayer and no replay value.

It basically boils down to this: if you make good games, I'll buy them. If you put out crap, I won't.

However, it should be noted that this only goes for 1) games and 2) MS Office. Now that I work tech support in the CS department of a university, I have access to the MSDN Academic Alliance copy of Office, so that's now legal, but I used to pirate it. I also used to feel bad about it, since I knew the only reason I was pirating it was that I needed to create Word documents for specific purposes (a resume comes to mind) and it's what everyone else uses; I'd have been technologically happy with OpenOffice. But it's gotten to the point where I've found free programs to replace all the little things I used to pirate.

For example: CuteFTP - now I use FileZilla. Eudora - now Thunderbird. Nero - now I use burnatonce, though I'm still looking for a free (beer, possibly speech too) CD burner that doesn't suck; burnatonce burns images, and does it well, but doesn't do anything else. Photoshop - GIMP. Norton Corporate AV - now AVG Free. I don't even remember what I used before Exact Audio Copy. And I want to know where VLC has been my whole life.

I've also stopped downloading TV programs and movies. Movies, basically because I never go to the movies anyway (baby), and anything that's good I'll buy when it's on DVD (I'd rather sit at home comfortable and be able to pause). TV: now that I have TiVo, I don't miss anything, and I've sort of gotten over the need to archive everything; but should I want to archive, I can always use TiVo Desktop, a program to strip the DRM from the files, and re-encode the MPEG-2.

So, basically, I'm pretty much proof of "if it's reasonably priced, I'd just as soon buy it". I'm also proof of "If you put out crap, and claim that piracy is hurting your sales, you're wrong: it's either too expensive, or it just sucks".

~W
--
sig?

Journal: Development Standards

Journal by greginnj
Programming Standards
(Score:5, Insightful)
by clockwise_music (594832) on Wednesday November 16, @08:27PM (#14048944)
(http://www.metaltheater.com/ | Last Journal: Friday May 06, @01:41AM)
In no particular order:

1. Get a development database, a testing database and production database. Yes, you need all three.
2. Write a few docs explaining each system. Make these as detailed as you possibly can. (This will save you weeks in the long term.)
3. Use software revision control. CVS, VSS, whatever, use one.
4. Use a bug tracker. BugZilla, Jira, CityDesk, whatever, use one.
5. Use whatever coding standards the language reckons you should. If java, use sun's standards. If microsoft, use their standards.
6. Write automated unit tests. I don't care if you're not an agile or XP developer, write unit tests. Check out Junit, or Nunit, or just write your own.
7. Set up your build so that you can check out ALL code from the source code repository and compile it with ONE COMMAND. E.g., "make" or "ant" or "maven" or whatever. This will take time but is worth it.
8. Have a naming standard for database tables. This will make your life SOOOO much easier.
9. Read thedailywtf.com and don't do anything that is posted there.
10. Write specs for your new developers. Please write specs for your new developers. Don't just say to them 'fix this up'.
11. Make sure code isn't hard-coded to a particular directory. Everyone does this. Fix it. (Might be part of step 7)
12. Create your own standard config files.
13. Have code reviewed by peers. Don't be a bastard but be nice when picking on people's code.
14. As mentioned, comment your code but use the language standard. Java - javadocs, Perl - perldocs, etc. These are cool, but don't get too carried away. Nothing can replace a good spec.
15. Ignore what most people say on Slashdot. (Except for me, of course).
That'll keep you busy for a couple of months! Doing this will put you well on the way to having a pretty high level of code quality. Most companies don't do all of them. Good luck.



1.a Get a test application server and a production application server. Yes, you need both. The development server is the developer's workstation.
6.a Formalize the testing process by people other than the original developers.
6.b Write test cases.
6.c Do regression testing, especially for "transparent to the user" changes.
8.a Have a naming standard for table columns. This will make your life SOOOO much easier.
11.a Where you do need hardcoded directories, externalize the locations in configuration files.
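Point 7 above ("one command") can start as small as a driver script that runs each stage in order and stops on the first failure; a sketch, where each echo stands in for your real checkout/compile/test tooling:

```shell
#!/bin/sh
# build.sh -- hypothetical one-command build driver for point 7.
# Each echo stands in for the real tool (cvs update, ant, junit, jar...).
set -e                         # a failing step aborts the whole build
for step in checkout compile unit-test package; do
    echo "== $step =="
    # real command for $step goes here
done
echo "build OK"
```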

--
I am a programmer. I am paid to produce syntax, not grammar. Deal with it.



Great post. Some additions...

1. Use a code reviewer like FxCop for .NET stuff. You can build custom rules that your team decides on, and apply them to each program.
2. Automate as much as possible with things like Ant or NAnt. The less people do, the less room for errors or discrepancies from project to project.
3. Use something like Cruise Control .NET which will tie everything together, run coverage reports, automate builds, etc.
4. Make a utility library so that there is a common place for repetitive processes. (E.g., we have one that validates whether a number and address is a valid store for our company; it is used in 75% of our programs but coded only once.) You could incorporate some common error handling like log4net or log4j into the utility.


Here a just a few things that come to mind:

1. Version Control - find a VC system everyone can agree on and use it religiously, whether for scripts, programs, or even web docs. I've used CVS mainly, with a little Perforce, and Subversion is good, so I hear.
2. Coding Standards - depending on how many and what type of languages you have, you'll want to develop standards for how code will be laid out and documented that make sense and also make it easy for somebody to move from one code base to another with as little trouble as possible. You can be as detailed as you like, right down to conventions for naming subroutines and indentation, but don't get carried away or you'll stifle creativity.
3. Documentation - not just documenting code (which any programmer should be doing reflexively), but documenting system flows and procedures. It doesn't hurt to throw together text docs on your more important scripts/programs, outlining where they live, how they're run, etc.
4. The Brain Book - there's nothing I hate more than starting a new job and having to learn all those server names, IP addresses, what I'm supposed to have access to, where in the directory tree the stuff I work on lives, what types of DBs we use and their versions, etc. So I developed the Brain Book, where I would write these things down as I learned them, to have a point of reference. It's a good idea to do this for all your major projects, so as new people come on, they can spend less time learning their way around and more time coding.
5. Code Review - everybody's coding style is different and sometimes they don't mesh well or there are divergent opinions on how a particular task should be coded out. Get your programmers together in a room and hash things out as a group. It will provide everyone with a say and may open up some people's eyes to new ways of doing things.


on this particular subject. i believe code complete 2 came out "reasonably recently". that said, were this my task, i'd say the following:

1) document things thoroughly using a tool like doxygen. there is no excuse for interfaces not to be thoroughly documented
2) adopt a standard naming convention. in java, this is easy -- just use the default. in other languages, you'll probably have to make your own up.
3) pick an indentation style. it really doesn't matter which since tools like indent can convert between them almost painlessly. all code that goes into the repository is run through indent to put it into a standard format
4) require that code compile cleanly with no warnings at the most anal retentive compiler settings before it can be checked in unless there are good reasons to ignore the compiler warnings
5) average devs are only able to commit to the "head" fork (or equivalent in your sccs). the code is not committed to the "real" fork until it passes whatever tests you have
6) incorporate tools like valgrind into your testing cycle --- they should come back largely clean. if they don't, things need to be fixed unless there's a really good reason not to.
7) people who check in code which breaks cvs or, upon a code review, are found to not sufficiently adhere to your guidelines owe their dev group donuts.


We don't need no coding standards!
(Score:5, Informative)
by MythoBeast (54294) on Wednesday November 16, @08:28PM (#14048949)
(http://www.mythologicalbeast.org/ | Last Journal: Monday September 08, @01:27PM)
Ok, so the subject is misleading. As a C++ contractor with about 15 years of experience in a broad variety of shops, I've been exposed to quite a lot of different coding standards, from the severely strict, where they told me where and when I could use the spacebar, to the completely non-existent. Of all of them, I have found the GNU coding standards [gnu.org] to be the best balance between the flexible and the legible.

A few other details that I'd like to add. K&R braces were invented not by K&R but by the guys who typeset their book. It is a severe roadbump to try to read code where the braces are at the end of an if statement instead of vertically aligned.

Try spinal alignment for variables. Most people align their variables like this:

int something;
void somethingelse;
longobjectname theThirdThing;

Those with more of a clue align them so that you can find the variable name easily in a mess of them:

int............something;
void...........*somethingElse;
longobjectname.theThirdThing;

This puts some major space, in some cases, between names and short type declarations. Try aligning them like this instead:

...........int..something;
..........void.*somethingElse;
longobjectname..theThirdThing;


The problem with this technique is that, if you ever post your code on Slashdot, you'll have to replace spaces with dots and spend fifteen minutes trying to get it to render correctly because SD doesn't support a simple PRE tag.

Other tidbits that have helped: camelNotation rules. Don't use Hungarian notation; it doesn't work in a severely object-oriented environment. Instead, precede your variables with a single letter that tells you where it's declared: l for local, m for member (of a class or struct), g for global, that kind of thing. I've seen "my" used for member and "the" used for static very effectively too, but stick to one.

Most of all, good luck. Remember that a lot of people's beliefs in this matter have no foundation except for what they've been doing for years. I have faith in my standards simply because I've seen what happens when you don't follow them, and that's mostly confusion.



Gnu coding standards:
http://www.gnu.org/prep/standards/

Journal: KDE apps

Journal by greginnj

Must-have KDE apps
(Score:5, Insightful)
by billybob2 (755512) on Sunday November 06, @12:21PM (#13963358)

The real issue is who is going to pay for the next generation of KDE development if SuSE isn't going to pay.

Mandrake, Kubuntu/Mark Shuttleworth, and Trolltech seem to realize the value of KDE's superior architecture, on which many must-have KDE apps have been built. These apps don't have any GNOME equivalents that are nearly as useful and feature-rich:

AmaroK music player [kde.org] -- Steve Jobs' nightmare, the single greatest threat to Itunes on the Free Software platform.

K3b [k3b.org] -- Best CD and DVD authoring program with intuitive wizards, on the fly transcoding between WAV, MP3, FLAC, and Ogg Vorbis, normalization of volume levels, CDDB, DVD Ripping and DivX/XviD encoding, Save/load projects, automatic hardware detection/calibration and much more.

DigiKam [sourceforge.net] -- The most feature-rich application for digital photo management.

Wireless Assistant [kde-apps.org] -- Most user-friendly app for connecting to wireless networks. Managed Networks Support, WEP Encryption Support, Per Network (AP) Configuration Profiles, Automatic (DHCP, both dhcpcd and dhclient) and manual configuration options, Connection status monitoring, etc

KDE Education [kde.org] -- Educational (Science, Literature, Geography, etc) programs for children. Could play a big role in whether school districts decide to use Free Software in their classrooms.

Konqueror File Manager [konqueror.org] -- Embedded image/PDF/music/video viewing (via KMPlayer) and a tree-view arrangement of the filesystem familiar to Windows users (Nautilus doesn't come anywhere close).

KDE Control Center [kde.org] -- Centralized location for desktop control. Controls _all_ common aspects of the KDE applications: language, power settings, special effects, icon and window themes, shadows, shortcuts, printers, privacy, etc. This is what makes KDE so well integrated: all KDE apps respect changes made here, so they all have the same feel. SUSE has even made YaST a module of the KDE Control Center so users can access distro-specific settings from here. Compare this to the dismembered approach Red Hat (and other GNOME distros) have been forced to adopt in the absence of a centralized GNOME control center (i.e., a bunch of individual programs named redhat-config-**** that nobody can ever remember).

Seamless, transparent network file access [kde.org] on SMB, FTP, SSH and WebDav networks from _any_ KDE application.

Kaffeine [sourceforge.net] -- The most polished FOSS movie player.

MythTV [mythtv.org] -- The most advanced analog and digital TV viewer/recorder in the Free Software world (built using QT).

Baghira [sourceforge.net] -- A native QT style that faithfully imitates OS X eyecandy, aimed at new users coming from the Mac world.

Klik [atekon.de] -- Gives non-experts access to bleeding-edge versions of apps without requiring any compilation or permanent installation.

KDE and QT also make up a technically superior platform for developers, drastically lowering the learning curve for programmers new to FOSS development. KDE apps can be built from the ground up using the best development tools in the Free Software world (which also happen to be built on QT/KDE):

Kdevelop [kdevelop.org] for syntax highlighting, application templates, and project organization.

QT designer [trolltech.com] for GUI development

Quanta [kdewebdev.org] -- Rich web development environment for PHP, CSS, DocBook, HTML, XML, etc with advanced context sensitive autocompletion, internal preview and more.

BKSys environment [freehackers.org] -- A complete replacement for the autotools chain (libtool+automake+autoconf+make) that will make dependency handling a whole lot simpler and more efficient.

Cervisia [kde-apps.org] -- User-friendly GUI frontend for CVS.


Journal: scp and avoiding symlinks

Journal by greginnj
by Jerf (17166) on Monday September 05, @01:25PM (#13483987) (http://www.jerf.org/iri/ | Last Journal: Saturday August 18, @12:04PM)

Leaning on tar is probably a better solution anyhow.

I don't know your exact needs, but you can make this easy on yourself with a very short shell script, or even just an alias. Instead of using "scp", use "ssh" directly, something like:

ssh -C [your login here] "tar cf - $*" | tar xf -

This runs "tar" on the remote server, $* is trying to convey the idea of passing all the params of the script/alias to the remote tar, and outputs to stdout. ssh redirects stdout across the network to its own stdout, which is then piped to local tar for extraction. -C compresses the stream, which is probably Good Enough, but under certain circumstances (CPU time vastly outweighs transfer time, think modem transfer here) it can be worthwhile to add bzip2 into the mix:

ssh [your login] "tar cf - $* | bzip2" | bunzip2 | tar xf -

Tune the script to your needs, and the reverse script is pretty easy too; ssh will redirect its stdin across the network just as easily if you use it in a pipe.

Note there is never a temporary file.

I belabor how this works because it took me a while to fully grasp how cool it is that ssh makes the Unix pipe idea fully work across the network. Note you can set up pipes on the remote side in the ssh command if you escape it correctly (apostrophes will usually do, but shell escaping can get evil). scp is more "convenience script" than "fundamental tool".
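The pipe works just as well with both ends local, which is an easy way to convince yourself of the plumbing before putting ssh in the middle (the directories below are scratch stand-ins):

```shell
#!/bin/sh
# Local dry run of the tar pipe: copy a tree through stdout/stdin with
# no temporary file. Putting ssh in the middle moves the left-hand tar
# to the remote host; nothing else changes.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir "$src/project"
echo data > "$src/project/file.txt"
( cd "$src" && tar cf - project ) | ( cd "$dst" && tar xf - )
ls "$dst/project"              # file.txt arrived intact
```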

--

my weblog [jerf.org]

Journal: Aliases with ssh

Journal by greginnj
by ScriptedReplay (908196) on Monday September 05, @12:30PM (#13483657)

Hint: use aliases in .ssh/config to make your life easier. Something like:

Host alias1
    Hostname hostname
    User username
    [add extra options like authentication method, X11 forwarding, agent forwarding, private key to use, and so on]

then you do scp file.tar.bz2 alias1:/path or fish://alias1/some/path (and get a password prompt). Less typing - and it works with bash completion too.

Journal: Slowing down dictionary attacks

Journal by greginnj
by RAMMS+EIN (578166) on Monday September 05, @12:58PM (#13483804) (http://inglorion.net/ | Last Journal: Sunday August 28, @03:10PM)

I had an instance of an attacker running a dictionary attack on my sshd the other day, and I was surprised by how many logins he could test per second (he was using multiple connections). I asked on #openbsd about ways of slowing down such attacks. This is the advice I got:

1. Run sshd on a different port. The scripts won't find you there. I don't like this option, because it requires me to specify the alternative port every time I ssh, scp, rsync, or svn. It's still about the easiest and most effective method.

2. Limit the connection rate to the port you're running sshd on. In many scenarios, it won't hurt you if you can't connect to it more than once in 5 seconds, but this will make a dictionary attack from a single machine very tedious. In OpenBSD 3.7, you can use pf with max-src-conn-rate.
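For option 2, a hedged pf.conf sketch of that rate limit (the table name, rates, and $ext_if macro are invented; check pf.conf(5) on your release for the exact syntax):

```
# Hypothetical pf.conf fragment: rate-limit inbound ssh connections
# and drop hosts that exceed the rate into a blocked table.
table <ssh-bruteforce> persist
block in quick from <ssh-bruteforce>
pass in on $ext_if proto tcp to ($ext_if) port ssh keep state \
        (max-src-conn-rate 3/30, overload <ssh-bruteforce> flush)
```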

3. Use a script like DenyHosts [sourceforge.net] to monitor your authentication log, and add suspicious hosts to a block list (either temporarily or permanently). This looks like a very nice solution to me.

4. I got this one from my girlfriend: disable password authentication and use key-based authentication instead. This is my preferred solution, except that I have to solve some problems with public-key authentication not working from some of the machines I use.
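On option 4: a common reason public-key auth mysteriously fails from some machines is permissions, since sshd ignores keys stored in group- or world-writable locations. A sketch of the expected client-side layout (a scratch directory stands in for the real $HOME; the sshd_config line is shown as a comment):

```shell
#!/bin/sh
# Sketch: permissions sshd expects before it will honor public keys.
# Server side, in sshd_config:  PasswordAuthentication no
home=$(mktemp -d)                       # stand-in for the real $HOME
mkdir -p "$home/.ssh"
touch "$home/.ssh/authorized_keys"      # ssh-keygen + ssh-copy-id fill this
chmod 700 "$home/.ssh"                  # sshd ignores the dir if looser
chmod 600 "$home/.ssh/authorized_keys"
stat -c '%a %n' "$home/.ssh" "$home/.ssh/authorized_keys"
```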

I hope this post is helpful to some of you. If you have any other methods that you would like to mention, I'd be glad to hear them.

-- Please correct me if I got my facts wrong.

Your girlfriend rocks. I always disable password authentication on a new server before I enable sshd for the first time. I'm pretty certain I could safely give my root password out on IRC without much risk, although prudence says I'm not completely interested in testing that theory.

What sort of problems do you have with public key authentication? I've been using it for years from both Unix and Windows clients without problem. If you're feeling particularly 1337, GSSAPI authentication is pretty darn convenient and not all that hard to configure these days.

Journal: Alternative to X - NX

Journal by greginnj
by La Gris (531858) on Monday September 05, @12:38PM (#13483697)

You really should have a look at FreeNX: http://freshmeat.net/projects/freenx/

FreeNX Server is the free and GPL'd NX server implementation by Fabian Franz, based on NoMachine.com's NX technology. NoMachine have thankfully licensed the core of NX under the GPL (they provide a closed-source commercial NX server product on top of that code, as well as professional support).

The NX protocol lets you use a remote X display while connected over low-bandwidth lines. It requires much less bandwidth than raw X or X over compressed ssh.

Journal: Firefox extensions

Journal by greginnj
Well... at least we have extensions. Here's my list:

TextZoom [cosmicat.com] - because I'm blind as a bat
Adblock [mozdev.org] - use with Filterset.G from http://www.pierceive.com [pierceive.com]
Session Saver [extensionsmirror.nl] - saves tab sessions _when_ firefox crashes
Web Developer [mozdev.org] - lot of web dev options
IE View [mozdev.org] - click to view in IE
Target Alert [bolinfest.com] - lets me know what I'm clicking on
ForecastFox [mozdev.org] - shows the forecast
FindBar Switch [danakil.free.fr] - makes the find bar toggle hide/un-hide with CTRL+F
Download Statusbar [mozdev.org] - much better than the download window/popup
SpellBound [sourceforge.net] - because my spelling sux

"Hey Ivan, check your six." -- Sidewinder missile jacket patch, showing a Sidewinder driving up the tail of a Russian Su-27
