I've only used httrack for automated website downloading. Be careful about how you set the recursion depth, and if the target site blocks you for looking like a bot, try changing the user agent or the JavaScript settings. I'm not sure how to get around a captcha, though.
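If it helps, a minimal httrack invocation along those lines might look something like the line below (the URL is just a placeholder; as far as I remember -r sets the mirror depth and -F overrides the user agent, but double-check against httrack --help for your version):

    httrack "https://example.com/" -O ./mirror -r3 -F "Mozilla/5.0 (X11; Linux x86_64)" "+*.example.com/*"

The trailing "+*.example.com/*" filter is what keeps the crawl from wandering off onto other domains.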
Back in the day when I still used Firefox, if it was something simple I'd use some addon to open the sequence of pages manually, then (depending on laziness) automate a loop of Ctrl+S, Enter, Ctrl+W until all the tabs were gone.
If you're after a wiki, I think you can export the whole thing through a special page, no need to crawl the site.
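If it happens to be MediaWiki, I believe the special page in question is Special:Export, which you can also hit with curl (the wiki URL and page name here are just placeholders):

    curl "https://wiki.example.org/wiki/Special:Export/Main_Page" -o Main_Page.xml

The form on that page also takes a newline-separated list of titles if you want more than one page per request.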
If the links follow a formula, curl or wget is the "basic" way to do it.
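For numbered pages that usually means something like the following (made-up URL pattern): curl can expand a [1-50] range on its own, and with wget you can lean on bash brace expansion to the same effect:

    curl -O "https://example.com/archive/page[1-50].html"
    wget https://example.com/archive/page{1..50}.html

Both just fetch page1.html through page50.html into the current directory; if the site is touchy about being hammered, wget's --wait option adds a delay between requests.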
If you'd browse the site manually first anyways, someone recommended this thing in software endorsements. I've never used it but it sounds nifty.