[php] How do I get the HTML code of a web page in PHP?

I want to retrieve the HTML code of a link (web page) in PHP. For example, if the link is

https://stackoverflow.com/questions/ask

then I want the HTML code of the page that is served at that URL, retrieved and stored in a PHP variable.

How can I do this?

This question is tagged with php and html.

The answers are below.


include_once('simple_html_dom.php');
$url = "http://stackoverflow.com/questions/ask";
$html = file_get_html($url);

This gives you the whole HTML page in parsed form (a DOM object you can traverse rather than a plain string). Download the 'simple_html_dom.php' file here: http://sourceforge.net/projects/simplehtmldom/files/simple_html_dom.php/download


Also, if you want to manipulate the retrieved page, you might want to try a PHP DOM parser. I find PHP Simple HTML DOM Parser very easy to use.
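
For example, a rough sketch of pulling the links out of a page with the parser's find() selectors (assuming simple_html_dom.php is on your include path):

include_once('simple_html_dom.php');

// Fetch and parse the page into a traversable DOM object
$html = file_get_html('http://stackoverflow.com/questions/ask');

// find() takes CSS-like selectors; here we grab every <a> tag
foreach ($html->find('a') as $link) {
    echo $link->href . ' => ' . $link->plaintext . "\n";
}

// Free the memory held by the parser when you are done
$html->clear();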


I tried this code and it's working for me.

$html = file_get_contents('https://www.google.com'); // the URL needs a scheme (http:// or https://)
$myVar = htmlspecialchars($html, ENT_QUOTES);        // escape it so the source is displayed as text
echo($myVar);

You can also use the DOMDocument class to pull out an individual HTML tag:

$homepage = file_get_contents('https://www.example.com/');
$doc = new DOMDocument;
libxml_use_internal_errors(true);              // real-world HTML is rarely perfectly valid
$doc->loadHTML($homepage);
$titles = $doc->getElementsByTagName('h3');
echo $titles->item(0)->nodeValue;              // text of the first <h3> on the page
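
If you need more targeted queries than getElementsByTagName(), DOMXPath works on the same parsed document. A minimal sketch (the //body//a expression is just an illustration):

$homepage = file_get_contents('https://www.example.com/');

$doc = new DOMDocument;
libxml_use_internal_errors(true);          // ignore warnings from imperfect markup
$doc->loadHTML($homepage);

$xpath = new DOMXPath($doc);

// Print the text of every link inside the page body
foreach ($xpath->query('//body//a') as $link) {
    echo trim($link->textContent) . "\n";
}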

You could use file_get_contents() if you want to store the source in a variable, although cURL is better practice:

$html = file_get_contents('http://example.com');
echo $html;

This will display the fetched page on your site.
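
For comparison, a minimal cURL version of the same fetch might look like this (a sketch with error handling omitted):

$ch = curl_init('http://example.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // return the body instead of printing it
$html = curl_exec($ch);
curl_close($ch);

if ($html !== false) {
    echo $html;
}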


You may want to check out the YQL libraries from Yahoo: http://developer.yahoo.com/yql

The task at hand is as simple as

select * from html where url = 'http://stackoverflow.com/questions/ask'

You can try this out in the console at: http://developer.yahoo.com/yql/console (requires login)

Also see Chris Heilmann's screencast for some nice ideas about what more you can do: http://developer.yahoo.net/blogs/theater/archives/2009/04/screencast_collating_distributed_information.html
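
A rough sketch of running that query from PHP, assuming the public YQL REST endpoint (query.yahooapis.com) is still reachable; YQL has since been retired, so treat this as historical:

$yql = "select * from html where url = 'http://stackoverflow.com/questions/ask'";
$endpoint = 'https://query.yahooapis.com/v1/public/yql?q=' . urlencode($yql) . '&format=json';

$response = file_get_contents($endpoint);   // requires allow_url_fopen
$data = json_decode($response, true);

var_dump($data['query']['results']);        // the scraped HTML comes back under query.results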


Here are two different, simple ways to get content from a URL:

1) The first method

Enable allow_url_fopen on your hosting (in php.ini or wherever your host exposes it); it is the setting that lets PHP's file functions open remote URLs.

<?php
// readfile() streams the page straight to the output and returns the number of bytes read;
// if you need the HTML in a variable instead, use file_get_contents().
readfile("http://example.com/");
?>

or

2) The second method

Enable the php_curl and php_openssl extensions.

<?php
// You can add other cURL options too;
// see http://php.net/manual/en/function.curl-setopt.php
function get_dataa($url) {
  $ch = curl_init();
  $timeout = 5;
  curl_setopt($ch, CURLOPT_URL, $url);
  curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)");
  curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);      // return the body instead of printing it
  // Disabling SSL verification avoids certificate errors, but it is insecure;
  // leave these checks enabled in production.
  curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false);
  curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
  curl_setopt($ch, CURLOPT_MAXREDIRS, 10);
  curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);      // follow redirects (up to 10)
  curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
  $data = curl_exec($ch);
  curl_close($ch);
  return $data;
}

$variableee = get_dataa('http://example.com');
echo $variableee;
?>


Simple way: Use file_get_contents():

$page = file_get_contents('http://stackoverflow.com/questions/ask');

Please note that allow_url_fopen must be enabled in your php.ini to be able to use URL-aware fopen wrappers.

More advanced way: if allow_url_fopen is disabled and you cannot change your PHP configuration, and ext/curl is installed, use the cURL library to connect to the desired page.
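
A small sketch of that decision, checking the configuration at runtime before choosing a method (the helper name fetch_page is just illustrative):

function fetch_page($url) {
    // Prefer the simple stream wrapper when the configuration allows it
    if (ini_get('allow_url_fopen')) {
        return file_get_contents($url);
    }

    // Otherwise fall back to cURL if the extension is loaded
    if (function_exists('curl_init')) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        $html = curl_exec($ch);
        curl_close($ch);
        return $html;
    }

    return false;   // neither option is available
}

$page = fetch_page('http://stackoverflow.com/questions/ask');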


$output = file("http://www.example.com"); didn't work for me until I enabled allow_url_fopen, allow_url_include, and file_uploads in php.ini on PHP 7.
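
Note that file() returns the page as an array of lines rather than a single string; a quick sketch of turning that into one variable:

$lines = file("http://www.example.com");   // one array element per line, line endings kept
$html  = implode('', $lines);              // join the lines back into a single string

echo $html;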