While you can store files in this fashion, it has significant tradeoffs:
- Most DBs are not optimized for large volumes of binary data, and query performance often degrades dramatically as the table bloats, even with indexes. (SQL Server 2008's FILESTREAM attribute on varbinary(max) columns is the notable exception.)
- DB backup and replication become far slower, because every backup and every replica now has to carry the binary blobs along with the relational data.
- It's a lot easier to deal with a corrupted drive holding 2 million images -- just swap the disk in the RAID array -- than with a DB table that has become corrupted.
- If you accidentally delete a dozen images on a filesystem, your operations team can restore them from backup pretty easily, and because the metadata table is tiny by comparison, it can be restored quickly as well. If you accidentally delete a dozen images from a giant database table, you're in for a long and painful restore of the whole DB from backup, paralyzing your entire system in the meantime.
These are just some of the drawbacks off the top of my head. For tiny projects it may be worth storing files this way, but for enterprise-grade software I would strongly recommend against it: keep the files on the filesystem and store only their paths and metadata in the DB (see the sketch below).
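For contrast, here's a minimal sketch of the filesystem-plus-metadata approach, using Python and sqlite3 purely for illustration. The table layout, `STORAGE_ROOT`, and the `store_file` helper are all hypothetical; in a real deployment the storage directory would live on whatever disk or SAN your operations team already backs up, and the DB would be whatever you're already running.

```python
import hashlib
import shutil
import sqlite3
from pathlib import Path

# Assumptions for illustration: files live on the filesystem under
# STORAGE_ROOT, and the database keeps only a small metadata row per file.
STORAGE_ROOT = Path("/var/app/files")   # hypothetical storage directory
DB_PATH = "metadata.db"                 # hypothetical metadata database

def init_db(conn: sqlite3.Connection) -> None:
    conn.execute(
        """CREATE TABLE IF NOT EXISTS files (
               id            INTEGER PRIMARY KEY,
               original_name TEXT NOT NULL,
               sha256        TEXT NOT NULL,
               relative_path TEXT NOT NULL UNIQUE,
               size_bytes    INTEGER NOT NULL
           )"""
    )

def store_file(conn: sqlite3.Connection, source: Path) -> int:
    """Copy the file into the storage tree and record its metadata row."""
    digest = hashlib.sha256(source.read_bytes()).hexdigest()
    # Shard directories by hash prefix so no single directory grows unbounded.
    relative = Path(digest[:2]) / digest[2:4] / digest
    target = STORAGE_ROOT / relative
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(source, target)
    cur = conn.execute(
        "INSERT INTO files (original_name, sha256, relative_path, size_bytes) "
        "VALUES (?, ?, ?, ?)",
        (source.name, digest, str(relative), target.stat().st_size),
    )
    conn.commit()
    return cur.lastrowid

if __name__ == "__main__":
    conn = sqlite3.connect(DB_PATH)
    init_db(conn)
    file_id = store_file(conn, Path("example.jpg"))  # hypothetical sample file
    print("stored file with id", file_id)
```

The point of the sketch is the split of responsibilities: the DB stays small and fast to back up or restore, while the bulk binary data sits on storage that ordinary file-level backups and RAID already handle.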