In its simplest form:
function crawl_page($url, $depth = 5) {
    if ($depth > 0) {
        // Fetch the page
        $html = file_get_contents($url);

        // Collect the href attribute of every anchor tag on the page
        preg_match_all('~<a.*?href="(.*?)".*?>~', $html, $matches);

        // Recurse into each link with one level of depth less
        foreach ($matches[1] as $newurl) {
            crawl_page($newurl, $depth - 1);
        }

        // Append this page's URL and contents to the results file
        file_put_contents('results.txt', $url."\n\n".$html."\n\n", FILE_APPEND);
    }
}

crawl_page('http://www.domain.com/index.php', 5);
That function gets the contents of a page, crawls every link it finds, and appends each page's contents to 'results.txt'. The function accepts a second parameter, depth, which defines how many levels of links to follow. Pass 1 if you only want to parse the links on the given page.
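Note that the regex will also match relative links (e.g. "/about.php"), which file_get_contents() can't fetch on its own, and pages that link back to each other will be fetched over and over. Below is a rough sketch of one way to handle both, assuming allow_url_fopen is enabled and using a hypothetical crawl_page_safe() that keeps a static list of visited URLs and naively prefixes relative links with the site root:

function crawl_page_safe($url, $depth = 5) {
    // Remember every URL fetched so far, across all recursive calls
    static $seen = array();
    if ($depth <= 0 || isset($seen[$url])) {
        return;
    }
    $seen[$url] = true;

    $html = @file_get_contents($url);
    if ($html === false) {
        return; // unreachable page, skip it
    }

    preg_match_all('~<a.*?href="(.*?)".*?>~', $html, $matches);

    // Base of the current URL, used to turn relative links into absolute ones
    $parts = parse_url($url);
    $base  = $parts['scheme'].'://'.$parts['host'];

    foreach ($matches[1] as $newurl) {
        if (strpos($newurl, 'http') !== 0) {
            // Naive resolution: treat every relative link as relative to the site root
            $newurl = $base.'/'.ltrim($newurl, '/');
        }
        crawl_page_safe($newurl, $depth - 1);
    }

    file_put_contents('results.txt', $url."\n\n".$html."\n\n", FILE_APPEND);
}

crawl_page_safe('http://www.domain.com/index.php', 5);

For anything beyond a quick experiment you would still want a real HTML parser (DOMDocument) and proper URL resolution instead of the regex and root-prefix shortcut used here.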