RatCreature ([personal profile] ratcreature) wrote, 2006-07-31 12:59 am

a question about preventing link rot...

With each update of my recs page I run a link checker as part of the validation, using the W3C online tools. However, a good number of sites, both archives and LJs, have robot exclusion rules, so with the W3C link checker I can't readily see whether the stories in question are really still there. Initially it wasn't a huge bother to check them manually, but by now it's over sixty links. Do any of you who maintain largish recs pages or other link collections have a good strategy for this, or an auto-checker tool that works on a website without being excluded as a robot?
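
What I'm picturing is just a small command-line script, along these lines. This is only a rough sketch: it assumes the PHP curl extension is installed, the filename check_links.php is made up, and the User-Agent string is just an example, since this would be a one-off manual check of my own recs list rather than a crawler:

    <?php
    // check_links.php -- a minimal sketch, not a polished tool.
    // Reads URLs (one per line) on stdin and prints the ones that look dead.

    while (($line = fgets(STDIN)) !== false) {
        $url = trim($line);
        if ($url === '') {
            continue;
        }

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_NOBODY, true);          // HEAD request, no body
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // follow redirects
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 15);
        curl_setopt($ch, CURLOPT_USERAGENT, 'my-recs-link-check (manual, not a crawler)');
        curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        // 0 means no response at all; 4xx/5xx means the page is gone or broken.
        // (Some servers refuse HEAD, so a 405 here would need a retry with a normal GET.)
        if ($status === 0 || $status >= 400) {
            echo "$status  $url\n";
        }

        sleep(1);  // be polite to the archives
    }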

I mean something like the bookmark editors in web browsers that go through your links and then show you the broken ones, only something that would check the links on a website. I assume HTML editors or website tools might have such a function, but I code my page simply in Emacs, with the help of HTML and PHP editing modes for highlighting and such, and I have never needed any specialized programs. I tried one HTML editor that was bundled with my distro and had been installed automatically as part of the standard set-up (Quanta?), but it all seemed rather more complicated than I wanted to get into just for checking a bunch of links; I couldn't get its link checker function to work properly with my file on the first try. I have no idea whether that was because the file isn't plain HTML but PHP mixed with HTML, or something else, but I decided I'd really prefer something simpler.
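
To run such a check straight from the page source, a crude extraction step like this could print the links one per line and feed them to the checker sketch above (recs.php and extract_links.php are only stand-in names for whatever the files are actually called):

    <?php
    // extract_links.php -- a minimal sketch; takes the page source as its argument.
    // Pulls every external href out of the file and prints one URL per line,
    // so the list can be piped into a checker or looked over by hand.

    $file = isset($argv[1]) ? $argv[1] : 'recs.php';
    $html = file_get_contents($file);

    // Crude href extraction; good enough for a hand-written recs page,
    // even one with PHP mixed in, since the links themselves are plain HTML.
    preg_match_all('/href\s*=\s*"(https?:\/\/[^"]+)"/i', $html, $matches);

    foreach (array_unique($matches[1]) as $url) {
        echo $url . "\n";
    }

The two could then be chained on the command line, e.g. php extract_links.php recs.php | php check_links.php > broken.txt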

So how do you check your links?
