Yes, you can do it yourself. It is just a matter of grabbing the source of the page and parsing it the way you want.
There are various possibilities. A good combo is python-requests (a high-level HTTP library built on top of urllib3) and BeautifulSoup4, which has its own methods to select elements and also supports CSS selectors:
import requests
from bs4 import BeautifulSoup as bs

# Fetch the page and parse its HTML
response = requests.get("http://foo.bar")
soup = bs(response.text, "html.parser")

# Grab every <div> carrying the CSS class "myCssClass"
some_elements = soup.find_all("div", class_="myCssClass")
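If you prefer CSS selectors, the same lookup can go through BeautifulSoup's select() (a minimal sketch reusing the soup object above; the selector itself is just an illustration):

# Equivalent query expressed as a CSS selector
some_elements = soup.select("div.myCssClass")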
Others prefer XPath-based parsing with lxml, the jQuery-like pyquery, or something else.
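As a rough sketch of the XPath route, assuming lxml is installed and reusing the illustrative URL and class name from above:

import requests
from lxml import html

# Parse the raw HTML into an element tree and query it with XPath
tree = html.fromstring(requests.get("http://foo.bar").content)
some_elements = tree.xpath('//div[@class="myCssClass"]')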
When the data you want is produced by JavaScript, the above won't work. You need something that actually executes the page's scripts, such as python-ghost or Selenium. I prefer the latter combined with PhantomJS, a headless browser that is much lighter and simpler to install than a full browser, and easy to use:
from selenium import webdriver

# Drive the headless PhantomJS browser so JavaScript gets executed
client = webdriver.PhantomJS()
client.get("http://foo")

# Hand the rendered markup to BeautifulSoup as before
soup = bs(client.page_source, "html.parser")
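One thing to watch for: page_source is whatever the browser has rendered at that instant, so for slow scripts you may want an explicit wait before grabbing it (a minimal sketch; the class name is the same illustrative one as above):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Block up to 10 seconds until the JavaScript-built element shows up
WebDriverWait(client, 10).until(
    EC.presence_of_element_located((By.CLASS_NAME, "myCssClass"))
)
soup = bs(client.page_source, "html.parser")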
I would advise starting with your own solution. You'll understand Scrapy's benefits by doing so.
ps: take a look at scrapely: https://github.com/scrapy/scrapely
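scrapely learns extraction rules from a single annotated example page; a minimal sketch following its README (the URLs and field names here are purely illustrative):

from scrapely import Scraper

s = Scraper()
# Train on one page whose fields you already know
s.train("http://example.com/page1", {"name": "Foo", "price": "9.99"})
# Apply the learned template to a structurally similar page
print(s.scrape("http://example.com/page2"))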
pps: take a look at Portia, which lets you extract information visually, without programming knowledge: https://github.com/scrapinghub/portia