[php] How do you parse and process HTML/XML in PHP?

How can one parse HTML/XML and extract information from it?

This question is related to: php, xml, parsing, xml-parsing, html-parsing

The answer is


Why you shouldn't (and when you should) use regular expressions

First off, a common misconception: Regexes are not for "parsing" HTML. Regexes can however "extract" data. Extracting is what they're made for. The major drawbacks of regex HTML extraction over proper SGML toolkits or baseline XML parsers are their syntactic effort and varying reliability.

Consider that a somewhat dependable HTML extraction regex:

<a\s+class="?playbutton\d?[^>]+id="(\d+)".+?    <a\s+class="[\w\s]*title
[\w\s]*"[^>]+href="(http://[^">]+)"[^>]*>([^<>]+)</a>.+?

is way less readable than a simple phpQuery or QueryPath equivalent:

$div->find(".stationcool a")->attr("title");

There are however specific use cases where they can help.

  • Many DOM traversal frontends don't reveal HTML comments <!--, which, however, are sometimes more useful anchors for extraction. In particular pseudo-HTML variations <$var> or SGML residues are easy to tame with regexps.
  • Oftentimes regular expressions can save post-processing. However, HTML entities often require manual handling.
  • And lastly, for extremely simple tasks like extracting <img src= urls, they are in fact a suitable tool. The speed advantage over SGML/XML parsers mostly only comes into play for these very basic extraction procedures.

It's sometimes even advisable to pre-extract a snippet of HTML using regular expressions /<!--CONTENT-->(.+?)<!--END-->/ and process the remainder using the simpler HTML parser frontends.
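
For instance, a minimal sketch of that hybrid approach, assuming QueryPath's qp() as the follow-up parser and a page that actually contains those marker comments:

if (preg_match('/<!--CONTENT-->(.+?)<!--END-->/s', $html, $m)) {
    // hand only the pre-extracted snippet to the real parser
    print qp($m[1])->find("a")->attr("href");
}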

Note: I actually have an app where I employ XML parsing and regular expressions alternatively. Just last week the PyQuery parsing broke, and the regex still worked. Yes, weird, and I can't explain it myself. But so it happened.
So please don't vote real-world considerations down, just because it doesn't match the regex=evil meme. But let's also not vote this up too much. It's just a sidenote for this topic.


Third party alternatives to SimpleHtmlDom that use DOM instead of String Parsing: phpQuery, Zend_Dom, QueryPath and FluentDom.


One general approach I haven't seen mentioned here is to run HTML through Tidy, which can be set to spit out guaranteed-valid XHTML. Then you can use any old XML library on it.
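
A minimal sketch of that approach, assuming the tidy extension is available (URL and selector are placeholders):

$html = file_get_contents('http://www.example.com/');

// repair the markup and force well-formed XHTML output
$xhtml = tidy_repair_string($html, array(
    'output-xhtml'     => true,
    'numeric-entities' => true, // avoid named entities the XML parser won't know
), 'utf8');

$xml = simplexml_load_string($xhtml);
// XHTML puts everything in a namespace, so register it before querying
$xml->registerXPathNamespace('x', 'http://www.w3.org/1999/xhtml');
foreach ($xml->xpath('//x:a') as $a) {
    echo $a['href'], "\n";
}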

But to your specific problem, you should take a look at this project: http://fivefilters.org/content-only/ -- it's a modified version of the Readability algorithm, which is designed to extract just the textual content (not headers and footers) from a page.


The best method for parsing XML:

$url = 'http://www.example.com/rss.xml';
$rss = simplexml_load_file($url); // load and parse the feed straight from the URL
$i = 0;
foreach ($rss->channel->item as $feedItem) {
  $i++;
  echo $feedItem->title;
  echo '<br>';
  echo $feedItem->link;
  echo '<br>';
  echo $feedItem->description;
  echo '<br>';
  if ($i > 5) break; // only show the first few items
}

I've created a library called HTML5DOMDocument that is freely available at https://github.com/ivopetkov/html5-dom-document-php

It supports query selectors too that I think will be extremely helpful in your case. Here is some example code:

$dom = new IvoPetkov\HTML5DOMDocument();
$dom->loadHTML('<!DOCTYPE html><html><body><h1>Hello</h1><div class="content">This is some text</div></body></html>');
echo $dom->querySelector('h1')->innerHTML;

This is commonly referred to as screen scraping, by the way. The library I have used for this is Simple HTML Dom Parser.


I recommend PHP Simple HTML DOM Parser.

It really has nice features, like:

foreach($html->find('img') as $element)
    echo $element->src . '<br>';

For 1a and 2: I would vote for the new Symfony component DomCrawler. This class allows queries similar to CSS selectors. Take a look at this presentation for real-world examples: news-of-the-symfony2-world.

The component is designed to work standalone and can be used without Symfony.

The only drawback is that it will only work with PHP 5.3 or newer.
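
A short sketch of what that looks like, assuming the DomCrawler and CssSelector components are installed via Composer:

use Symfony\Component\DomCrawler\Crawler;

$crawler = new Crawler('<html><body><p class="message">Hello World!</p></body></html>');

// filter() accepts CSS selectors; filterXPath() is the XPath equivalent
echo $crawler->filter('body > p.message')->text();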


Simple HTML DOM is a great open-source parser:

simplehtmldom.sourceforge

It treats DOM elements in an object-oriented way, and the new iteration has a lot of coverage for non-compliant code. There are also some great functions like you'd see in JavaScript, such as the "find" function, which will return all instances of elements of that tag name.

I've used this in a number of tools, testing it on many different types of web pages, and I think it works great.


Advanced Html Dom is a simple HTML DOM replacement that offers the same interface, but it's DOM-based which means none of the associated memory issues occur.

It also has full CSS support, including jQuery extensions.
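
Since it aims to be a drop-in replacement, the familiar Simple HTML DOM calls should work unchanged; a sketch based on that compatibility claim (the include path is an assumption):

require 'advanced_html_dom.php'; // adjust the path to your install

$html = str_get_html('<div class="item"><a href="/x">link</a></div>');
echo $html->find('.item a', 0)->href; // full CSS descendant selectors work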


phpQuery and QueryPath are extremely similar in replicating the fluent jQuery API. That's also why they're two of the easiest approaches to properly parse HTML in PHP.

Examples for QueryPath

Basically you first create a queryable DOM tree from an HTML string:

 $qp = qp("<html><body><h1>title</h1>..."); // or give filename or URL

The resulting object contains a complete tree representation of the HTML document. It can be traversed using DOM methods. But the common approach is to use CSS selectors like in jQuery:

 $qp->find("div.classname")->children()->...;

 foreach ($qp->find("p img") as $img) {
     print qp($img)->attr("src");
 }

Mostly you want to use simple #id and .class or DIV tag selectors for ->find(). But you can also use XPath statements, which sometimes are faster. Also typical jQuery methods like ->children() and ->text() and particularly ->attr() simplify extracting the right HTML snippets. (And already have their SGML entities decoded.)

 $qp->xpath("//div/p[1]");  // get first paragraph in a div

QueryPath also allows injecting new tags into the stream (->append), and later output and prettify an updated document (->writeHTML). It can not only parse malformed HTML, but also various XML dialects (with namespaces), and even extract data from HTML microformats (XFN, vCard).

 $qp->find("a[target=_blank]")->toggleClass("usability-blunder");


phpQuery or QueryPath?

Generally QueryPath is better suited for manipulation of documents. While phpQuery also implements some pseudo AJAX methods (just HTTP requests) to more closely resemble jQuery. It is said that phpQuery is often faster than QueryPath (because of fewer overall features).

For further information on the differences see this comparison on the wayback machine from tagbyte.org. (Original source went missing, so here's an internet archive link. Yes, you can still locate missing pages, people.)

And here's a comprehensive QueryPath introduction.

Advantages

  • Simplicity and Reliability
  • Simple to use alternatives ->find("a img, a object, div a")
  • Proper data unescaping (in comparison to regular expression grepping)

We have created quite a few crawlers for our needs before. At the end of the day, it is usually simple regular expressions that do the job best. While the libraries listed above are good for the reason they were created, if you know what you are looking for, regular expressions are a safer way to go, as you can also handle non-valid HTML/XHTML structures, which would fail when loaded via most of the parsers.


JSON and array from XML in three lines:

$xml = simplexml_load_string($xml_string);
$json = json_encode($xml);
$array = json_decode($json, TRUE);

Ta da!
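
Note that attributes survive the round trip under an "@attributes" key; a quick check:

$xml = simplexml_load_string('<root><item id="1">first</item></root>');
$array = json_decode(json_encode($xml), TRUE);
print_r($array); // the id attribute shows up under ['item']['@attributes']['id']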


For HTML5, html5lib has been abandoned for years now. The only HTML5 library I can find with a recent update and maintenance records is html5-php which was just brought to beta 1.0 a little over a week ago.
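
A minimal sketch of html5-php usage, assuming the masterminds/html5 package is installed via Composer:

use Masterminds\HTML5;

$html5 = new HTML5();
$dom = $html5->loadHTML('<!DOCTYPE html><html><body><main>Hi</main></body></html>');

// the result is a plain DOMDocument, so all the usual DOM/DOMXPath tools apply
echo $dom->getElementsByTagName('main')->item(0)->textContent;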


There are many ways to process HTML/XML DOM of which most have already been mentioned. Hence, I won't make any attempt to list those myself.

I merely want to add that I personally prefer using the DOM extension, and why:

  • it makes optimal use of the performance advantage of the underlying C code
  • it's OO PHP (and allows me to subclass it)
  • it's rather low level (which allows me to use it as a non-bloated foundation for more advanced behavior)
  • it provides access to every part of the DOM (unlike e.g. SimpleXml, which ignores some of the lesser known XML features)
  • it has a syntax used for DOM crawling that's similar to the syntax used in native JavaScript.

And while I miss the ability to use CSS selectors for DOMDocument, there is a rather simple and convenient way to add this feature: subclassing the DOMDocument and adding JS-like querySelectorAll and querySelector methods to your subclass.

For parsing the selectors, I recommend using the very minimalistic CssSelector component from the Symfony framework. This component just translates CSS selectors to XPath selectors, which can then be fed into a DOMXpath to retrieve the corresponding Nodelist.

You can then use this (still very low level) subclass as a foundation for more high level classes, intended to eg. parse very specific types of XML or add more jQuery-like behavior.

The code below comes straight out of my DOM-Query library and uses the technique I described.

For HTML parsing :

namespace PowerTools;

use \Symfony\Component\CssSelector\CssSelector as CssSelector;

class DOM_Document extends \DOMDocument {
    public function __construct($data = false, $doctype = 'html', $encoding = 'UTF-8', $version = '1.0') {
        parent::__construct($version, $encoding);
        if ($doctype && $doctype === 'html') {
            @$this->loadHTML($data);
        } else {
            @$this->loadXML($data);
        }
    }

    public function querySelectorAll($selector, $contextnode = null) {
        if (isset($this->doctype->name) && $this->doctype->name == 'html') {
            CssSelector::enableHtmlExtension();
        } else {
            CssSelector::disableHtmlExtension();
        }
        $xpath = new \DOMXpath($this);
        return $xpath->query(CssSelector::toXPath($selector, 'descendant::'), $contextnode);
    }

    [...]

    public function loadHTMLFile($filename, $options = 0) {
        $this->loadHTML(file_get_contents($filename), $options);
    }

    public function loadHTML($source, $options = 0) {
        if ($source && $source != '') {
            $data = trim($source);
            $html5 = new HTML5(array('targetDocument' => $this, 'disableHtmlNsInDom' => true));
            $data_start = mb_substr($data, 0, 10);
            if (strpos($data_start, '<!DOCTYPE ') === 0 || strpos($data_start, '<html>') === 0) {
                $html5->loadHTML($data);
            } else {
                // use the encoding this document was constructed with for the stub document
                @$this->loadHTML('<!DOCTYPE html><html><head><meta charset="' . $this->encoding . '" /></head><body></body></html>');
                $t = $html5->loadHTMLFragment($data);
                $docbody = $this->getElementsByTagName('body')->item(0);
                while ($t->hasChildNodes()) {
                    $docbody->appendChild($t->firstChild);
                }
            }
        }
    }

    [...]
}

See also Parsing XML documents with CSS selectors by Symfony's creator Fabien Potencier on his decision to create the CssSelector component for Symfony and how to use it.


XML_HTMLSax is rather stable - even if it's not maintained any more. Another option could be to pipe your HTML through Html Tidy and then parse it with standard XML tools.


Yes, you can use simple_html_dom for the purpose. However, I have worked quite a lot with simple_html_dom, particularly for web scraping, and have found it to be too fragile. It does the basic job but I won't recommend it anyway.

I have never used curl for the purpose but what I have learned is that curl can do the job much more efficiently and is much more solid.

Kindly check out this link: scraping-websites-with-curl
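
For what it's worth, fetching with cURL and then handing the result to any parser discussed here looks roughly like this (a sketch with a placeholder URL):

$ch = curl_init('http://www.example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
$html = curl_exec($ch);
curl_close($ch);
// $html can now be fed to DOMDocument, simple_html_dom, etc.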


I have written a general purpose XML parser that can easily handle gigabyte-sized files. It's based on XMLReader and it's very easy to use:

$source = new XmlExtractor("path/to/tag", "/path/to/file.xml");
foreach ($source as $tag) {
    echo $tag->field1;
    echo $tag->field2->subfield1;
}

Here's the github repo: XmlExtractor


Just use DOMDocument->loadHTML() and be done with it. libxml's HTML parsing algorithm is quite good and fast, and contrary to popular belief, does not choke on malformed HTML.
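
A minimal example; the libxml_use_internal_errors() call silences the warnings libxml raises for sloppy markup:

libxml_use_internal_errors(true);

$dom = new DOMDocument();
$dom->loadHTML('<p>Hello <b>World'); // unclosed tags are repaired on load
echo $dom->getElementsByTagName('b')->item(0)->textContent; // prints "World"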


Another option you can try is QueryPath. It's inspired by jQuery, but runs on the server in PHP, and is used in Drupal.


QueryPath is good, but be careful of "tracking state", because if you don't realise what it means, you can waste a lot of debugging time trying to find out what happened and why the code doesn't work.

What it means is that each call on the result set modifies the result set in the object; it's not chainable like in jQuery, where each link in the chain is a new set. Instead you have a single set, which holds the results from your query, and each function call modifies that single set.

In order to get jQuery-like behaviour, you need to branch before you do a filter/modify-like operation; that way it mirrors what happens in jQuery much more closely.

$results = qp("div p");
$forename = $results->find("input[name='forename']");

$results now contains the result set for input[name='forename'], NOT the original query "div p". This tripped me up a lot. What I found was that QueryPath tracks the filters and finds and everything that modifies your results, and stores them in the object. You need to do this instead:

$forename = $results->branch()->find("input[name='forename']");

Then $results won't be modified, and you can reuse the result set again and again. Perhaps somebody with much more knowledge can clear this up a bit, but it's basically like this from what I've found.


There are several reasons not to parse HTML by regular expression. But if you have total control of what HTML will be generated, then you can do well with a simple regular expression.

Below is a function that parses HTML by regular expression. Note that this function is very sensitive and demands that the HTML obey certain rules, but it works very well in many scenarios. If you want a simple parser and don't want to install libraries, give this a shot:

function array_combine_($keys, $values) {
    $result = array();
    foreach ($keys as $i => $k) {
        $result[$k][] = $values[$i];
    }
    // collapse single-element groups to scalars
    array_walk($result, function (&$v) {
        $v = (count($v) == 1) ? array_pop($v) : $v;
    });

    return $result;
}

function extract_data($str) {
    return (is_array($str))
        ? array_map('extract_data', $str)
        : ((!preg_match_all('#<([A-Za-z0-9_]*)[^>]*>(.*?)</\1>#s', $str, $matches))
            ? $str
            : array_map('extract_data', array_combine_($matches[1], $matches[2])));
}

print_r(extract_data(file_get_contents("http://www.google.com/")));


The Symfony framework has components which can parse HTML, and you can use CSS selectors to select DOM elements instead of using XPath.


I created a library named PHPPowertools/DOM-Query, which allows you to crawl HTML5 and XML documents just like you do with jQuery.

Under the hood, it uses symfony/DomCrawler for conversion of CSS selectors to XPath selectors. It always uses the same DomDocument, even when passing one object to another, to ensure decent performance.


Example use :

namespace PowerTools;

// Get file content
$htmlcode = file_get_contents('https://github.com');

// Define your DOMCrawler based on file string
$H = new DOM_Query($htmlcode);

// Define your DOMCrawler based on an existing DOM_Query instance
$H = new DOM_Query($H->select('body'));

// Passing a string (CSS selector)
$s = $H->select('div.foo');

// Passing an element object (DOM Element)
$s = $H->select($documentBody);

// Passing a DOM Query object
$s = $H->select( $H->select('p + p'));

// Select the body tag
$body = $H->select('body');

// Combine different classes as one selector to get all site blocks
$siteblocks = $body->select('.site-header, .masthead, .site-body, .site-footer');

// Nest your methods just like you would with jQuery
$siteblocks->select('button')->add('span')->addClass('icon icon-printer');

// Use a lambda function to set the text of all site blocks
$siteblocks->text(function( $i, $val) {
    return $i . " - " . $val->attr('class');
});

// Append the following HTML to all site blocks
$siteblocks->append('<div class="site-center"></div>');

// Use a descendant selector to select the site's footer
$sitefooter = $body->select('.site-footer > .site-center');

// Set some attributes for the site's footer
$sitefooter->attr(array('id' => 'aweeesome', 'data-val' => 'see'));

// Use a lambda function to set the attributes of all site blocks
$siteblocks->attr('data-val', function( $i, $val) {
    return $i . " - " . $val->attr('class') . " - photo by Kelly Clark";
});

// Select the parent of the site's footer
$sitefooterparent = $sitefooter->parent();

// Remove the class of all i-tags within the site's footer's parent
$sitefooterparent->select('i')->removeAttr('class');

// Wrap the site's footer within two new elements
$sitefooter->wrap('<section><div class="footer-wrapper"></div></section>');

[...]

Supported methods :

See the project's README for the full list of supported jQuery-style methods. Two of them were renamed:

  1. Renamed 'select', for obvious reasons
  2. Renamed 'void', since 'empty' is a reserved word in PHP

NOTE :

The library also includes its own zero-configuration autoloader for PSR-0 compatible libraries. The example included should work out of the box without any additional configuration. Alternatively, you can use it with composer.


This sounds like a good task description of W3C XPath technology. It's easy to express queries like "return all src attributes of img tags that are nested in <foo><bar><baz> elements." Not being a PHP buff, I can't tell you in what form XPath may be available. If you can call an external program to process the HTML file you should be able to use a command line version of XPath. For a quick intro, see http://en.wikipedia.org/wiki/XPath.
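
As it happens, XPath is built into PHP's DOM extension through the DOMXPath class; a minimal sketch (the query and URL are only illustrative):

$dom = new DOMDocument();
libxml_use_internal_errors(true); // tolerate real-world HTML
$dom->loadHTML(file_get_contents('http://www.example.com/'));

$xpath = new DOMXPath($dom);
foreach ($xpath->query('//div//img/@src') as $attr) {
    echo $attr->value, "\n"; // each img src nested inside a div
}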


With FluidXML you can query and iterate XML using XPath and CSS Selectors.

$doc = fluidxml('<html>...</html>');

$title = $doc->query('//head/title')[0]->nodeValue;

$doc->query('//body/p', 'div.active', '#bgId')
        ->each(function($i, $node) {
            // $node is a DOMNode.
            $tag   = $node->nodeName;
            $text  = $node->nodeValue;
            $class = $node->getAttribute('class');
        });

https://github.com/servo-php/fluidxml


You could try using something like HTML Tidy to cleanup any "broken" HTML and convert the HTML to XHTML, which you can then parse with a XML parser.


Try Simple HTML DOM Parser

  • An HTML DOM parser written in PHP 5+ that lets you manipulate HTML in a very easy way!
  • Requires PHP 5+.
  • Supports invalid HTML.
  • Find tags on an HTML page with selectors just like jQuery.
  • Extract contents from HTML in a single line.
  • Download


Examples:

How to get HTML elements:

// Create DOM from URL or file
$html = file_get_html('http://www.example.com/');

// Find all images
foreach($html->find('img') as $element)
    echo $element->src . '<br>';

// Find all links
foreach($html->find('a') as $element)
    echo $element->href . '<br>';


How to modify HTML elements:

// Create DOM from string
$html = str_get_html('<div id="hello">Hello</div><div id="world">World</div>');

$html->find('div', 1)->class = 'bar';

$html->find('div[id=hello]', 0)->innertext = 'foo';

echo $html;


Extract content from HTML:

// Dump contents (without tags) from HTML
echo file_get_html('http://www.google.com/')->plaintext;


Scraping Slashdot:

// Create DOM from URL
$html = file_get_html('http://slashdot.org/');

// Find all article blocks
$articles = array();
foreach($html->find('div.article') as $article) {
    $item['title']   = $article->find('div.title', 0)->plaintext;
    $item['intro']   = $article->find('div.intro', 0)->plaintext;
    $item['details'] = $article->find('div.details', 0)->plaintext;
    $articles[] = $item;
}

print_r($articles);

If you're familiar with jQuery selectors, you can use ScarletsQuery for PHP:

<?php
include "ScarletsQuery.php";

// Load the HTML content and parse it
$html = file_get_contents('https://www.lipsum.com');
$dom = Scarlets\Library\MarkupLanguage::parseText($html);

// Select meta tag on the HTML header
$description = $dom->selector('head meta[name="description"]')[0];

// Get 'content' attribute value from meta tag
print_r($description->attr('content'));

$description = $dom->selector('#Content p');

// Get element array
print_r($description->view);

This library usually takes less than 1 second to process offline HTML.
It also accepts invalid HTML and missing quotes on tag attributes.

