The problem with "converting" arbitrary Unicode to ASCII is that the meaning of a character is culture-dependent. For example, to a German speaker “ß” should be converted to "ss", while an English speaker would probably map it to “B”.
Add to that the fact that Unicode often has multiple code points for the same glyph.
The upshot is that the only way to do this is to create a massive table mapping each Unicode character to the ASCII string you want it converted to. You can take a shortcut by normalizing accented characters to Normalization Form KD (NFKD), which decomposes them into a base character plus combining marks that you can then strip, but not all characters decompose to ASCII. In addition, Unicode does not define which parts of a glyph are "accents".
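If the NFKD shortcut is good enough for your data, it looks roughly like this (a minimal C# sketch using .NET's built-in string.Normalize and CharUnicodeInfo; the method name RemoveDiacritics is just illustrative, not a framework API):

using System;
using System.Globalization;
using System.Text;

static string RemoveDiacritics(string input)
{
    // NFKD splits accented characters into a base character plus combining marks.
    string decomposed = input.Normalize(NormalizationForm.FormKD);

    var sb = new StringBuilder(decomposed.Length);
    foreach (char ch in decomposed)
    {
        // Drop the combining marks (the "accents"); keep everything else.
        if (CharUnicodeInfo.GetUnicodeCategory(ch) != UnicodeCategory.NonSpacingMark)
            sb.Append(ch);
    }
    return sb.ToString();
}

Console.WriteLine(RemoveDiacritics("déjà vu Ångström")); // "deja vu Angstrom"

Characters with no ASCII decomposition (“ß”, “Æ”, CJK ideographs, ...) pass through unchanged, which is exactly why a complete conversion still needs the table approach.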
Here is a tiny excerpt from an app that takes the table approach:
switch (c)
{
    case 'A':
    case '\u00C0': // À LATIN CAPITAL LETTER A WITH GRAVE
    case '\u00C1': // Á LATIN CAPITAL LETTER A WITH ACUTE
    case '\u00C2': // Â LATIN CAPITAL LETTER A WITH CIRCUMFLEX
    // and so on for about 20 lines...
        return "A";

    case '\u00C6': // Æ LATIN CAPITAL LETTER AE
        return "AE";

    // And so on for pages...
}
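In a real app, the switch lives inside a helper that walks the whole string and concatenates the replacements, along these lines (a hedged sketch; AsciiMapper, ToAscii, and MapChar are hypothetical names rather than the original app's code, and the "?" fallback is just one possible policy for unmapped characters):

using System.Text;

static class AsciiMapper
{
    public static string ToAscii(string input)
    {
        var sb = new StringBuilder(input.Length);
        foreach (char c in input)
            sb.Append(MapChar(c)); // MapChar holds the giant switch shown above
        return sb.ToString();
    }

    static string MapChar(char c)
    {
        switch (c)
        {
            // ...pages of case labels, as in the excerpt above...
            default:
                // No mapping known: pass plain ASCII through, substitute the rest.
                return c < 128 ? c.ToString() : "?";
        }
    }
}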