What was the strangest coding standard rule that you were forced to follow?

When I have asked whether teams should have coding standards, the answer has almost always been a definite yes.

What was the strangest coding standard rule that you were ever forced to follow?

And by strangest I mean funniest, or worst, or just plain odd.

In each answer, please mention the language, your team size, and what ill effects the rule caused you and your team.


Postfixing _ to member variables. e.g.

int numberofCycles_;

This was in C++ on an open source project with a couple of developers. The main side effect was not knowing that a variable had class scope until getting to the end of the name. Not something I had thought much about before, but clearly backwards.


There must be 165 unit tests (not necessarily automated) per 1000 lines of code. That works out to roughly one test for every six lines.

Needless to say, some of the lines of code are quite long, and functions return this pointers to allow chaining.


I am not allowed to use this-> to reference member variables in our C++ code...


Once worked on a project where underscores were banned. And I mean totally banned. So in a C# WinForms app, whenever we added a new event handler (e.g. for a button) we'd have to rename the default method name from buttonName_Click() to something else, just to satisfy the ego of the guy who wrote the coding standards. To this day I don't know what he had against the humble underscore.


Totally useless database naming conventions. Every table name has to start with a number. The numbers show which kind of data is in the table.

  • 0: data that is used everywhere
  • 1: data that is used by a certain module only
  • 2: lookup table
  • 3: calendar, chat and mail
  • 4: logging

This makes it hard to find a table if you only know the first letter of its name. Also - as this is an MS SQL database - we have to surround table names with square brackets everywhere.

-- doesn't work
select * from 0examples;

-- does work
select * from [0examples];

Every beginning and ending brace was required to have a comment:

public void HelloWorld(string name)
{

  if(name == "Joe")
  {
    Console.WriteLine("Hey, Joe!");
  } //if(name == "Joe")
  else
  {
    Console.WriteLine("Hello, " + name);
  } //if(name == "Joe")
} //public void HelloWorld(string name)

That's what led me to write my first Visual Studio plugin to automate that.


I was told that old code should be commented out rather than being removed, in case we needed to refer to the old code (yes, the code was in source control...). This doesn't seem that bad, until major changes are made. Then it becomes a nightmare, with entire commented-out sections littered all over the code.


We were doing a C++ project and the team lead was a Pascal guy.

So we had a coding standard include file to redefine all that pesky C and C++ syntax:

#define BEGIN {
#define END }

but wait there's more!

#define ENDIF }
#define CASE switch

etc. It's hard to remember after all this time.

This took what would have been perfectly readable C++ code and made it illegible to anyone except the team lead.

We also had to use reverse Hungarian notation, i.e.

MyClass *class_pt  // pt = pointer to type

UINT32 maxHops_u   // u = uint32

although oddly I grew to like this.


Reverse indentation. For example:

    for(int i = 0; i < 10; i++)
        {
myFunc();
        }

and:

    if(something)
        {
// do A
        }
    else
        {
// do B
    }

I ran into two rules that I really hated on a C job a few years ago:

  1. "One module per file," where "module" was defined as a C function.

  2. Function-local variables allowed only at the top of the function, so this sort of thing was illegal:

if (test)
{
   int i;
   ...
}
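
For contrast, here is a minimal sketch of what the rule forced instead - the same test with the variable hoisted to the top of the function (identifiers are illustrative, not from the actual codebase):

void process(int test)
{
   int i;                    /* forced to the top, used or not */

   if (test)
   {
      for (i = 0; i < 10; i++)
      {
         /* ... */
      }
   }
}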

Applying s_ to variables and methods deemed "safety critical" in software that was part of a control system. Couple this with the other rule about putting m_ on the front of member variables and you'd get something ridiculous like s_m_blah(), which is darn annoying to write and not very readable in my opinion. The idea was that some 'safety expert' would gain insight by looking at the code and determining something from those s_ prefixes - in practice, they didn't know C++ too well, so they couldn't do much other than produce reports on the number of identifiers we'd marked as 'safety critical'. Utter nonsense...


The team size was about a dozen. For C# methods we had to put a huge XML-formatted comment before every method. I don't remember the format exactly, but it involved XML tags nested about three to five levels deep. Here's a sketch of the comment from memory.

/// <comment>
/// </comment>
/// <table>
///    <thead>
///       <tcolumns>
///          <column>Date</column>
///          <column>Modified By</column>
///          <column>Comment</column>
///       </tcolumns>
///    </thead>
///    <rows>
///       <row>
///          <column>10/10/2006</column>
///          <column>Fred</column>
///          <column>Created function</column>
///       </row>
///    </rows>
/// <parameters>

I've got to stop there....

The downsides were many.

  • Files were made up mostly of comments.
  • We were not using our version control system for tracking changes to files.
  • Writing many small functions hurt readability.
  • Lots of scrolling.
  • Some people did not update the comments.

I used a code snippet (Emacs YAS) to add this code to my methods.


I implemented and modified an open-source ASP Classic shopping cart (mostly a long string of DailyWTF candidates) that started every variable with a lowercase p. As in pTax_Amount or pFirst_Name.

There was no explanation for this, though I read somewhere on one of their forums that it was to avoid using reserved words like State - you'd have pState instead. They also append Temp to things somewhat randomly, like rsTemp and connTemp. As opposed to the permanent recordsets and database connections, I guess.


Not quite a coding standard, but in 1998 I worked for a company where C++ was banned, in favour of C. This was because OO was considered too complex for the software engineers to grasp.

In our C code we were required to prefix all semi-colons with a space

int someInt = 5 ;

I could never find out a reason for this, but after a while it did grow on me.


I worked at a place that had a merger between 2 companies. The 'dominant' one had a major server written in K&R C (i.e. pre-ANSI). They forced the Java teams (from both offices -- probably 20 devs total) to use this format, which gleefully ignored the 2 pillars of the "brace debate" and went straight to crazy:

if ( x == y ) 
    {
    System.out.println("this is painful");
    x = 0;
    y++;
    }


No single-character variable names - even for a simple iterator like i. We had to use ii or something. I thought this was stupid.

Another one - perhaps the craziest of all, but maybe not a coding standard...

No STL allowed - and this was in 2007/2008. I left there soon after I found out about that nonsense. Apparently some idiots thought that there was no "standard" (as if it were still 15 years ago...). I guess they missed the memo about the STL being part of the C++ standard...

Use of the stupid COM HRESULTs as return types for just about ALL methods - even if they are not COM. It was ludicrous. So now instead of returning some enumerated type or a useful value that indicates a result, etc, we had to look up what S_OK or E_FAIL or whatever meant in the context of each of the methods. Again, I left there shortly after that.


The worst I've experienced was to do with code inspections. For some reason even though we had and used the diff tool of our vcs to see what had changed, when you wanted your code inspected you had to surround your changes in a file/function with some comment blocks like so:

/*********...80charswide...***
 * START INSPECT
 */

 some changed code...

 /*
  * END INSPECT
  *********...80charswide...****/

After the inspection you'd have to go back and remove all those comment blocks before committing. ugh.


In Java, when contracting somewhere that shall remain nameless, Interfaces were banned. The logic? The guy in charge couldn't find implementing classes with Eclipse...

Also banned - anonymous inner classes, on the grounds that the guy in charge didn't know what they were. Which made implementing a Swing GUI all kinds of fun.


Back in my COBOL days, we had to use three asterisks for comments (COBOL requires only one asterisk in column 7). We even had a pre-compiler that checked for this, and wouldn't compile your program if you used anything but three asterisks.


Doing all database queries via stored procedures in SQL Server 2000. From complex multi-table queries to simple ones like:

select id, name from people

The arguments in favor of procedures were:

  • Performance
  • Security
  • Maintainability

I know that the procedure topic is quite controversial, so feel free to score my answer negatively ;)


A buddy of mine encountered this rule while working at a government job. The use of ++ (pre or post) was completely banned. The reason: Different compilers might interpret it differently.


In C++, we had to explicitly write out everything the compiler would otherwise generate for us (default constructor, destructor, copy constructor, copy assignment operator) for every class. It looks like whoever wrote the standards was not very confident in the language.
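
For illustration, here is roughly what that meant for even a trivial class - a minimal sketch, with hypothetical names, of the four members we had to spell out by hand:

class Widget
{
public:
    Widget();                                 // default constructor
    ~Widget();                                // destructor
    Widget(const Widget& other);              // copy constructor
    Widget& operator=(const Widget& other);   // copy assignment operator

private:
    int value_;
};

Widget::Widget() : value_(0) {}
Widget::~Widget() {}
Widget::Widget(const Widget& other) : value_(other.value_) {}
Widget& Widget::operator=(const Widget& other)
{
    value_ = other.value_;
    return *this;
}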


Once I had to write a small DLL for another team, and when it was done I had to redo the job because I shouldn't have used "else" in the code. When I asked why, I was instructed not to ask why - the leader of the other team just "didn't get the else stuff".


If I remember correctly, the Delphi IDE defaulted to an indent of two spaces. Most of the company's legacy code used three spaces and was written by the VP of IT and the CEO. One day, all the programmers were talking about what we should do to make our lives easier, and a contractor who knew Delphi pretty well said, "Hey, the IDE defaults to two spaces - does anyone have a problem with us doing that going forward for new code?" All of us looked at each other, pretty much thought it was a no-brainer, and said that we agreed.

Two days later the VP and CEO found out we were going to make such a dangerous change that could "cause problems" and instructed us that we would be using three-space indents for everything until the two of them could accurately evaluate the impact of such a change. Now I am all for following standards, but these are the same people who thought OO programming was creating an object with one function that had all of the logic necessary to perform an action, and that source control meant moving the code files to a different directory.


At a major UK bank I was brought in to act as a design authority on a new .NET system.

Their rules stated that database table names had to be a maximum of 8 characters long, with the project code (a 5-digit code) as the prefix.

They were enforcing old DB2 rules on Windows projects. Sigh.


Inserting line breaks
(//--------------------------------------------------------------------------------)
between methods in a C# project.


The strangest was that type-qualified variable naming had to be used in Java, and the types were those of the columns from the database. So a java.sql.ResultSet had to be called tblClient, etc.


At my first job, all C programs, no matter how simple or complex, had only four functions. You had the main, which called the other three functions in turn. I can't remember their names, but they were something along the lines of begin(), middle(), and end(). begin() opened files and database connections, end() closed them, and middle() did everything else. Needless to say, middle() was a very long function.

And just to make things even better, all variables had to be global.
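
A minimal sketch of the shape every program ended up with (names and details are illustrative - I only half-remember the real ones):

#include <stdio.h>

/* all variables had to be global */
FILE *input;
long record_count;

void begin(void)  { input = fopen("records.dat", "r"); }

void middle(void)
{
    /* ...everything else: parsing, validation, reports, updates... */
}

void end(void)    { if (input) fclose(input); }

int main(void)
{
    begin();
    middle();
    end();
    return 0;
}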

One of my proudest memories of that job is having been part of the general revolt that led to the destruction of those standards.


At a former job:

  • "Normal" tables begin with T_
  • "System" tables (usually lookups) begin with TS_ (except when they don't because somebody didn't feel like it that day)
  • Cross-reference tables begin with TSX_
  • All field names begin with F_

Yes, that's right. All of the fields, in every single table. So that we can tell it's a field.


Prefix tables with dbo_

Yes, as in dbo.dbo_tablename.


As I have always worked as a self-employed freelancer/project leader, I never had to follow someone else's standards; all the standards were my own decisions. But I recently found a fun piece of a "coding standards document" I wrote back when I was 15:

All functions must be named "ProjectName_FunctionName".

Well, procedural PHP, anyone? PHP OOP wasn't really a thing yet, but still - if I wanted to reuse code from one project in another, I would have to rewrite all the references, etc.

I could have used something like "package_FunctionName".


We had to sort all the functions in classes alphabetically, to make them "easier to find". Never mind that the IDE had a drop-down - that was too many clicks.

(same tech lead wrote an app to remove all comments from our source code).


It was a coding standard I did not follow myself (I got in trouble for other things, but never that). We had three 19" monitors, so we could have two editors open full screen and still have access to the desktop. Everyone else did not use comments, but relied on meaningful names instead. Extremely long meaningful names. The longest I remember was in the 80-character range; the average was around 40-50.

Guess what, they didn't accurately describe the whole thing.


Half of the team favored four-space indentation; the other half favored two-space indentation.

As you can guess, the coding standard mandated three, so as to "offend all equally" (a direct quote).


In 1987 or so, I took a job with a company that hired me because I was one of a small handful of people who knew how to use Revelation. Revelation, if you've never heard of it, was essentially a PC-based implementation of the Pick operating system - which, if you've never heard of it, got its name from its inventor, the fabulously-named Dick Pick. Much can be said about the Pick OS, most of it good. A number of supermini vendors (Prime and MIPS, at least) used Pick, or their own custom implementations of it.

This company was a Prime shop, and for their in-house systems they used Information. (No, that was really its name: it was Prime's implementation of Pick.) They had a contract with the state to build a PC-based system, and had put about a year into their Revelation project before the guy doing all the work, who was also their MIS director, decided he couldn't do both jobs anymore and hired me.

At any rate, he'd established a number of coding standards for their Prime-based software, many of which derived from two basic conditions: 1) the use of 80-column dumb terminals, and 2) the fact that since Prime didn't have a visual editor, he'd written his own. Because of the magic portability of Pick code, he'd brought his editor down into Revelation, and had built the entire project on the PC using it.

Revelation, of course, being PC-based, had a perfectly good full-screen editor, and didn't object when you went past column 80. However, for the first several months I was there, he insisted that I use his editor and his standards.

So, the first standard was that every line of code had to be commented. Every line. No exceptions. His rationale for that was that even if your comment said exactly what you had just written in the code, having to comment it meant you at least thought about the line twice. Also, as he cheerfully pointed out, he'd added a command to the editor that formatted each line of code so that you could put an end-of-line comment.

Oh, yes. When you commented every line of code, it was with end-of-line comments. In short, the first 64 characters of each line were for code, then there was a semicolon, and then you had 15 characters to describe what your 64 characters did. In short, we were using an assembly language convention to format our Pick/Basic code. This led to things that looked like this:

EVENT.LIST[DATE.INDEX][-1] = _         ;ADD THE MOST RECENT EVENT
   EVENTS[LEN(EVENTS)]                 ;TO THE END OF EVENT LIST

(Actually, after 20 years I have finally forgotten R/Basic's line-continuation syntax, so it may have looked different. But you get the idea.)

Additionally, whenever you had to insert multiline comments, the rule was that you use a flower box:

************************************************************************
**  IN CASE YOU NEVER HEARD OF ONE, OR COULDN'T GUESS FROM ITS NAME,  **
**  THIS IS A FLOWER BOX.                                             **
************************************************************************

Yes, those closing asterisks on each line were required. After all, if you used his editor, it was just a simple editor command to insert a flower box.

Getting him to relent and let me use Revelation's built-in editor was quite a battle. At first he was insistent, simply because those were the rules. When I objected that a) I already knew the Revelation editor b) it was substantially more functional than his editor, c) other Revelation developers would have the same perspective, he retorted that if I didn't train on his editor I wouldn't ever be able to work on the Prime codebase, which, as we both knew, was not going to happen as long as hell remained unfrozen over. Finally he gave in.

But the coding standards were the last to go. The flower-box comments in particular were a stupid waste of time, and he fought me tooth and nail on them, saying that if I'd just use the right editor maintaining them would be perfectly easy. (The whole thing got pretty passive-aggressive.) Finally I quietly gave in, and from then on all of the code I brought to code reviews had his precious flower-box comments.

One day, several months into the job, when I'd pretty much proven myself more than competent (especially in comparison with the remarkable parade of other coders that passed through that office while I worked there), he was looking over my shoulder as I worked, and he noticed I wasn't using flower-box comments. Oh, I said, I wrote a source-code formatter that converts my comments into your style when I print them out. It's easier than maintaining them in the editor. He opened his mouth, thought for a moment, closed it, went away, and we never talked about coding standards again. Both of our jobs got easier after that.


My old boss insisted that we use constants instead of enums, but never gave a reason, and in every scenario where these were used an enum made more sense.

The better one though was insisting that all table names be singular and then making the classes in code singular as well. But not only did they represent the object, such as a user or group, they also represented the table and contained all of the CRUD for that table and numerous other actions. But wait, there’s more! They also had to contain a publicly visible name/value collection so that way you could get the properties with an indexer, by column name, just in case you added a new column but didn't want to add in a new property. There were a bunch of other "must do's" that not only didn't make sense, but put a big performance hit on the code as well. I could try to point them all out but the code speaks for itself and sadly this is almost an exact copy of the User class I just pulled out of an old archive folder:

public class Record
{
    private string tablename;
    private Database database;

    public NameValueCollection Fields;

    public Record(string TableName) : this(TableName, null) { }
    public Record(string TableName, Database db)
    {
        tablename = TableName;
        database = db;
    }

    public string TableName
    {
        get { return tablename; }
    }

    public ulong ID
    {
        get { return GetULong("ID"); }
        set { Fields["ID"] = value.ToString(); }

    }

    public virtual ulong GetULong(string field)
    {
        try { return ulong.Parse(this[field]); }
        catch(Exception) { return 0; }
    }

    public virtual bool Change()
    {
        InitializeDB(); // opens the connection
        // loop over the Fields object and build an update query
        DisposeDB(); // closes the connection
        // return the status
    }

    public virtual bool Create()
    {
        // works almost just like the Change method
    }

    public virtual bool Read()
    {
        InitializeDB(); // opens the connection
        // use the value of the ID property to build a select query
        // populate the Fields collection with the columns/values if the read was successful
        DisposeDB(); // closes the connection
        // return the status    
    }
}

public class User : Record
{
    public User() : base("User") { }
    public User(Database db) : base("User", db) { }

    public string Username
    {
        get { return Fields["Username"]; }
        set
        {
            Fields["Username"] = value.ToString(); // yes, there really is a redundant ToString call
        }
    }
}

Sorry if this double-posts; the first time around I might not have been human, or maybe the site just has a limit on how bad posted code can be.


The one that got me was similar to the other poster's "tbl" prefix for SQL table names.

In this case, the prefix for all stored procedures was to be "sp_" despite the fact that "sp_" is a prefix used by Microsoft for system-level stored procedures in SQL Server. Well, they had their standards from an old, non-MS database and weren't about to change just because their standard might cause a stored procedure to collide with a system stored procedure and produce unpredictable results. No, that just wouldn't be proper.


Several WTF's in one VB6 shop (I'm not proud, I was hungry and needed to eat) back in 2002 - 2004.

The most annoying, IMHO, was setting all object references to Nothing at the end of the sub/function. This was to "help" the compiler reference count. It didn't matter how many tests I performed for the TA to prove it wasn't necessary - oh no, it still had to be done, even though he had absolutely no evidence to back him up whatsoever. Eventually I gave up, and about a year later found an article explaining why it was pants. I brought this to the TA thinking "Got the fecker!". He goes, "Yeah, I've known about that for years, but if you start changing the standard the sheep" - meaning the other developers, the people he worked with every day - "will screw it up". Gobshite.

Others in the same shop.

  • Never delete code, always comment it out (even though we were using source control).
  • Prefixes on table names that were meaningless when I got there, but had to be enforced on new tables.
  • Prefixing all objects with o_ (lo_ for procedure-level references, mo_ for module, go_ for global). Absolutely pointless in a project where every other variable was an object reference.

Mostly I was writing C++ there (I was the only C++ developer, so I made my own standards and enforced them with rigor!) with occasional VB; otherwise I wouldn't have lasted.


We have a "no code past the 80th character column" rule that is controversial in our C++ development team. Liked and enforced in code reviews by some; despised by others.

Also, we have a very controversial C++ throw(), throw(...) specification standard. Religiously used by some and demonized by others. Both camps cite discussions and experts to enforce their respective positions.
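
For anyone who hasn't run into them, the argument was over dynamic exception specifications of this form (illustrative declarations, not our actual code; these specs were deprecated in C++11 and removed in C++17):

#include <stdexcept>

void Parse(const char* text) throw();                   // promises never to throw
void Load(const char* path) throw(std::runtime_error);  // may throw only this type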


Almost any kind of Hungarian notation.

The problem with Hungarian notation is that it is very often misunderstood. The original idea was to prefix the variable so that its meaning was clear. For example:

int appleCount = 0; // Number of apples.
int pearCount = 0; // Number of pears.

But most people use it to determine the type.

int iAppleCount = 0; // Number of apples.
int iPearCount = 0;  // Number of pears.

This is confusing because, although both numbers are integers, everybody knows you can't compare apples with pears.


Capitalizing Acronyms

DO capitalize both characters of two-character acronyms except the first word of a camel-cased identifier.

System.IO
public void StartIO(Stream ioStream)

DO capitalize only the first character of acronyms with three or more characters except the first word of a camel-cased identifier.

System.Xml
public void ProcessHtmlTag(string htmlTag)

DO NOT capitalize any of the characters of any acronyms, whatever their length, at the beginning of a camel-cased identifier.


Forbidden:

while (true) {

Allowed:

for (;;) {

What drives me nuts is people suffixing the ID field of a table with the name of the table. What the hell is wrong with just ID? You're going to have to alias it anyway... for the love of all that is sacred!

Imagine what your SQL statements look like when you've got id fields called IDSEWEBLASTCUSTOMERACTION and IDSEEVENTLOGGER.


Being forced to have only 1 return statement at the end of a method and making the code fall down to that.

Also not being able to re-use case statements in a switch and let it drop through; I had to write a convoluted script that did a sort of loop of the switch to handle both cases in the right order.

Lastly, when I started using C, I found it very odd to declare my variables at the top of a method and absolutely hated it. I'd spent a good couple of years in C++ and just declared them wherever I wanted. Now, unless there are optimisation reasons not to, I declare all method variables at the top of a method with details of what they all do - it makes maintenance A LOT easier.
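
As a sketch of what the single-exit rule forces, here is a hypothetical lookup written both ways (purely illustrative):

#include <vector>
#include <cstddef>

// Natural style: return as soon as the answer is known (forbidden).
int FindIndex(const std::vector<int>& values, int target)
{
    for (std::size_t i = 0; i < values.size(); ++i)
        if (values[i] == target)
            return static_cast<int>(i);
    return -1;
}

// Mandated style: carry the result down to the single return at the end.
int FindIndexSingleExit(const std::vector<int>& values, int target)
{
    int result = -1;
    for (std::size_t i = 0; i < values.size() && result == -1; ++i)
        if (values[i] == target)
            result = static_cast<int>(i);
    return result;
}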


Strangest was "this must be coded in C++". Presumably I'm being hired for my expertise. If my expert opinion says another language would do the job better, then that other language should be the one used. Telling me which tool I should use is about the same as telling an automobile mechanic that he's only allowed to use metric wrenches. And only wrenches.


When using SQL Server, which has such big limits on table name length that I've never personally bumped into them, we were forced to use the naming convention from the older mainframe system, even though the new system never interacted with the mainframe database.

Because of the tiny limit on the table names, the convention was to give all the tables codenames, rather than meaningful descriptions.

So, on a system that could quite happily have had the "customer" table called "ThisIsTheCustomerTable", instead it was called "TBRC03AA". And the next table was called "TBRC03AB", and the next one called "TBRC03AC", and so on.

That made the SQL really easy to understand, especially a month after you'd written it.


The very strangest one I had, and one which took me quite some time to overthrow, was when the owner of our company demanded that our new product be IE only. If it could work on Firefox, that was OK, but it had to be IE only.

This might not sound too strange, except for one little flaw. All of the software was for a bespoke server software package, running on Linux, and all client boxes that our customer was buying were Linux. Short of trying to figure out how to get Wine (in those days, very unreliable) up and running on all of these boxes, seeing if we could get IE running, and training their admins how to debug Wine problems, it simply wasn't possible to meet the owner's request. The problem was that he was doing the Web design and simply didn't know how to make Web sites compliant with Firefox.

It probably won't shock you to know that our company went bankrupt.


I once worked under the tyranny of the Mighty VB King.

The VB King was the pure master of MS Excel and VBA, as well as databases (hence his title: he played with Excel while the developers worked with compilers, and challenging him on databases could have detrimental effects on your career...).

Of course, his immense skills gave him a unique vision of development problems and project management solutions: while not exactly coding standards in the strictest sense, the VB King regularly had new ideas about "coding standards" and "best practices" which he tried (and often succeeded) to impose on us. For example:

  • All C/C++ arrays shall start at index 1, instead of 0. Indeed, the use of 0 as first index of an array is obsolete, and has been superseded by Visual Basic 6's insightful array index management.

  • All functions shall return an error code: There are no exceptions in VB6, so why would we need them at all? (i.e. in C++)

  • Since "All functions shall return an error code" is not practical for functions returning meaningful types, all functions shall have an error code as first [in/out] parameter.

  • All our code will check the error codes (this led to the worst case of VBScript if-indentation I ever saw in my career... Of course, as the "else" clauses were never handled, no error was actually found until too late).

  • Since we're working with C++/COM, starting this very day, we will code all our DOM utility functions in Visual Basic.

  • ASP 115 errors are evil. For this reason, we will use On Error Resume Next in our VBScript/ASP code to avoid them.

  • XSL-T is an object-oriented language. Use inheritance to resolve your problems (my jaw almost dropped the day I heard this one).

  • Exceptions are not used, and thus should be removed. For this reason, we will uncheck the checkbox asking for destructor calls in case of exception unwinding (it took days for an expert to find the cause of all those memory leaks, and he almost went berserk when he found out they had willingly ignored (and hidden) his technical note about checking the option again, sent weeks before).

  • catch all exceptions in the COM interface of our COM modules, and dispose them silently (this way, instead of crashing, a module would only appear to be faster... Shiny!... As we used the über error handling described above, it even took us some time to understand what was really happening... You can't have both speed and correct results, can you?).

  • Starting today, our code base will split into four branches. We will manage their synchronization and integrate all bug corrections/evolutions by hand.
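
A minimal sketch of what that error-code-first convention looked like in practice (the signature and names are my illustration, not the VB King's actual code):

#include <string>

// Illustrative only: the error code travels as the first in/out parameter,
// and every caller is expected to check it (many didn't).
void LoadCustomerName(long& errorCode, int customerId, std::string& nameOut)
{
    errorCode = 0;                 // 0 meant "no error"
    if (customerId < 0)
    {
        errorCode = -1;            // the only way failures were reported
        return;
    }
    nameOut = "Customer #" + std::to_string(customerId);
}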

All but the C/C++ arrays, the VB DOM utility functions and XSL-T as an OOP language were implemented despite our protests. Of course, over time, some were discovered to be, ahem, broken, and abandoned altogether.

Of course, the VB King credibility never suffered for that: Among the higher management, he remained a "top gun" technical expert...

This produced some amusing side effects, as you can see by following the link What is the best comment in source code you have ever encountered?


Back in my C++ days we were not allowed to use ==, >=, <=, &&, etc. There were macros for this...

if (bob EQ 7 AND alice LEQ 10)
{
   // blah
}

This was obviously to deal with the old "accidental assignment in a conditional" bug; however, we also had the rule "put constants before variables", so:

if (NULL EQ ptr); //ok
if (ptr EQ NULL); //not ok
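
I'd guess the header defining those macros looked something along these lines (my reconstruction, not the original file):

// Reconstruction only - the gist of the mandated comparison/logic macros.
#define EQ  ==
#define NEQ !=
#define LEQ <=
#define GEQ >=
#define AND &&
#define OR  ||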

Just remembered, the simplest coding standard I ever heard was "Write code as if the next maintainer is a vicious psychopath who knows where you live."


Using generic numbered identifier names

At my current work we have two rules which are really mean:

Rule 1: Every time we create a new field in a database table, we have to add additional reserve fields for future use. These reserve fields are numbered (because no one knows what data they will hold some day). The next time we need a new field, we first look for an unused reserve field.

So we end up with customer.reserve_field_14 containing the e-mail address of the customer.

One day our boss thought about introducing reserve tables, but fortunately we managed to convince him not to.

Rule 2: One of our products is written in VB6, and VB6 has a limit on the total number of distinct identifier names. Since the codebase is very large, we constantly run into this limit. As a "solution", all local variable names are numbered:

  • Lvarlong1
  • Lvarlong2
  • Lvarstr1
  • ...

Although that effectively circumvents the identifier limit, these two rules combined lead to beautiful code like this:

...

If Lvarbool1 Then
  Lvarbool2 = True
End If

If Lvarbool2 Or Lvarstr1 <> Lvarstr5 Then
  db.Execute("DELETE FROM customer WHERE " _ 
      & "reserve_field_12 = '" & Lvarstr1 & "'")
End If

...

You can imagine how hard it is to fix old or someone else's code...

Latest update: Now we are also using "reserve procedures" for private members:

Private Sub LSub1(Lvarlong1 As Long, Lvarstr1 As String)
  If Lvarlong1 >= 0 Then 
    Lvarbool1 = LFunc1(Lvarstr1)
  Else
    Lvarbool1 = LFunc6()
  End If
  If Lvarbool1 Then
    LSub4 Lvarstr1
  End If
End Sub

EDIT: It seems that this code pattern is becoming more and more popular. See this Daily WTF post to learn more: Astigmatism :)


Anything having to do with formatting (especially placement of '{' and other block characters) is always a pain to enforce.

Even with an automatic format on each source file check-in, you cannot be sure every developer will always use the same formatter with the same set of formatting rules...

And then you have to merge those files back to trunk. And you commit suicide ;)


Using _ or m_ in front of member variables when you could simply use the keyword this. whenever you need to access them...


All file names must be in lower case...


(Probably only funny in the UK.)

An insurer I worked at wanted a "P" or "L" prefix to denote the scope, concatenated with Hungarian notation for the type, on all properties.

The plus point was we had a property called pintMaster! Made us all fancy a drink.


Marking private variables with an _ just to make sure that we know we are dealing with private variables within the class. Then using PHP's magic methods __get and __set to provide access to each of the variables as if they were public anyway...


All documents in my company are version-controlled. So far, so good.

But for EVERY single file, upon first committing to CVS, you must immediately add two tags to it: CRE (for CREation) and DEV001 (for 1st DEVelopment cycle). As if it being the first version of the file itself wasn't enough.

After that, the process gets a bit more reasonable, fortunately.


In the project I work on, hard-coding values is a strict NO. So we were forced to #define them, as below:

#define ONE 1


Our old c# coding standards required that we use huge, ugly comment blocks. You know in Code Complete where Steve McConnell gives a prime example of an ugly comment macro? That. Almost an exact match.

The worst thing about this was that c# is a language that already has good (and relatively unobtrusive) comment support.

You'd get something like this:

/// <summary>
/// Add an item to the collection
/// </summary>
/// <parameter name="item">The item to add</parameter>
/// <returns>Whether the addition succeeded</returns>
public bool Add(int item) { ... }

and it'd turn into this:

// ########################################################## //
/// <summary>
///     Add an item to the collection
/// </summary>
///     IN:  <parameter name="item">The item to add</parameter>
///     OUT: <returns>Whether the addition succeeded</returns>
// ########################################################## //

Note that StackOverflow's syntax highlighting does not do it justice, as with the default VS text scheme, the # symbol is bright green, resulting in an overpowering violation of your retinas.

I can only assume the authors were really, really fond of it from previous endeavours with C/C++. The problem was that, even if you just had a couple of auto properties, it'd take up about 50% of your screen space and add significant noise. The extra // lines also messed up R#'s refactoring support.

After we ditched the comment macro, we ended up spanking the whole codebase with a script that took us back to visual studio's default c# comment style.


I once had to spell out all acronyms, even industry standard ones such as OpenGL. Variable names such as glu were not good, but we had to use graphicsLibraryUtility.


Back in the 80's/90's, I worked for an aircraft simulator company that used FORTRAN. Our FORTRAN compiler had a limit of 8 characters for variable names. The company's coding standards reserved the first three of them for Hungarian-notation style info. So we had to try and create meaningful variable names with just 5 characters!


I've had a lot of stupid rules, but not a lot that I considered downright strange.

The silliest was on a NASA job I worked back in the early 90's. This was a huge job, with well over 100 developers on it. The experienced developers who wrote the coding standards decided that every source file should begin with a four-letter acronym, and the first letter had to stand for the group that was responsible for the file. This was probably a great idea for the old FORTRAN 77 projects they were used to.

However, this was an Ada project, with a nice hierarchical library structure, so it made no sense at all. Every directory was full of files starting with the same letter, followed by 3 more nonsense letters, an underscore, and then the part of the file name that mattered. All the Ada packages had to start with this same five-character wart. Ada "use" clauses were not allowed either (arguably a good thing under normal circumstances), so that meant any reference to any identifier that wasn't local to that source file also had to include this useless wart. There probably should have been an insurrection over this, but the entire project was staffed by junior programmers and fresh-from-college new hires (myself being the latter).

A typical assignment statement (already verbose in Ada) would end up looking something like this:

NABC_The_Package_Name.X := NABC_The_Package_Name.X + 
  CXYZ_Some_Other_Package_Name.Delta_X;

Fortunately they were at least enlightened enough to allow us more than 80 columns! Still, the facility wart was hated enough that it became boilerplate code at the top of everyone's source files to use Ada "renames" to get rid of the wart. There'd be one rename for each imported ("withed") package. Like this:

package Package_Name renames NABC_Package_Name;
package Some_Other_Package_Name renames CXYZ_Some_Other_Package_Name;
--// Repeated in this vein for an average of 10 lines or so

What the more creative among us took to doing was trying to use the wart to make an actually sensible (or silly) package name. (I know what you are thinking, but expletives were not allowed and shame on you! That's disgusting). For example, I was in the Common code group, and I needed to make a package to interface with the Workstation group. After a brainstorming session with the Workstation guy, we decided to name our packages so that someone needing both would have to write:

with CANT_Interface_Package;
with WONT_Interface_Package;

The creator of the file (who doesn't have to put any code in it) has to put their name in the file. So if you create stubs or placeholders, you "own" them forever.

The guy who actually writes the code doesn't add his name; we had source control, so we'd always know who to blame.


While coding for a VB project I was asked to add the following comment section for each of the methods

'Module Name
'Module Description
'Parameters and description of each parameter
'Called by
'Calls

While I found the rest quite alright, I was against the last two; the reason I argued was that as the project becomes large, they become difficult to maintain. If we are creating a library function, we can never reliably maintain Called by. We were a small team of 6, so the manager's argument was that since we were the ones calling the functions, this should be maintained. Anyway, I had to give up the argument as the manager was adamant. The result was as expected: as the project became larger, no one cared to maintain Called by and Calls.


Wow -- this brings back so many memories of one particular place that I worked: Arizona Department of Transportation.

There was a project manager there who didn't understand object-based programming (and didn't want to understand it). She was convinced that object-based programming was a fad, and refused to let anybody check in code that used any kind of object-based programming.

(Seriously -- she actually spent a lot of her day reviewing code that we had checked-in to Visual SourceSafe just to make sure we weren't breaking the rules).

Considering Visual Basic 4 had just been released (this was about 12 years ago), and considering that the Windows forms application we were building in VB4 used objects to describe the forms, this made development ... complicated.

A buddy of mine actually tried to get around this problem by encapsulating his 'object code' inside dummy 'forms' and she eventually caught on that he was just (* gasp *) hiding his objects!

Needless to say, I only lasted about 3 months there.

Gosh, I disliked that woman's thinking.


I absolutely hate it when someone doesn't use a naming convention. Where I worked, the lead developer (whom I replaced) couldn't figure out if he wanted to use camelCase or way_over_used_underscores. Personally, I hate the underscores and find camel case easier to read, but it doesn't really matter as long as you keep to one standard.

PHP is especially bad at this; take a look at mysql_numrows, which merges the two words with neither caps nor an underscore.


An externally-written C coding standard that had the rule 'don't rely on built in operator precedence, always use brackets'

Fair enough, the obvious intent was to ban:

a = 3 + 6 * 2;

in favour of:

a = 3 + (6 * 2);

Thing was, this was enforced by a tool that followed the C syntax rules that '=', '==', '.' and array access are operators. So code like:

a[i].x += b[i].y + d - 7;

had to be written as:

((a[i]).x) += (((b[i]).y + d) - 7);

In a large group at my company, we use C++ almost exclusively. Passing by non-const reference is forbidden.

If you want to modify a parameter to a function, you must pass it by pointer.

We have an internal flame war over the pros (easier to identify function calls that can modify variables) and cons (ridiculousness; having to deal with possible NULL pointers when you want a parameter to be required) about once a year.
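
A minimal sketch of the two styles (hypothetical function, not from our codebase) - the reference version is what the rule forbids, the pointer version is what it mandates:

// Forbidden: non-const reference parameter.
void Scale(double& value, double factor)
{
    value *= factor;
}

// Mandated: pass by pointer, which also means every callee
// now has to decide what a null pointer means.
void Scale(double* value, double factor)
{
    if (value == nullptr)
        return;                 // or assert, or log - hence the yearly flame war
    *value *= factor;
}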


Having to put an m_ prefix on Java instance variables and a g_ prefix on Java static variables - the most un-Java cruft I have ever had to deal with, perpetuated by C and C++ developers who didn't know how to use anything other than Notepad to develop Java with!

Except that nobody actually followed this, other than putting m_ on everything - even statics, even method names...


My weirdest one was at a contract a couple years ago. @ZombieSheep's weird one was part of it, but not the weirdest one in that company.

No, the weirdest one in that company was the database naming scheme. Every table was named in all caps, with underscores between the words. Every table had a prefix (generally 1 - 6 characters) which was usually an acronym or an abbreviation of the main table name. Every field of the table was prefixed with the same prefix as well. So, let's say you have a simple schema where people can own cats or dogs. It'd look like this:

PER_PERSON
    PER_ID
    PER_NameFirst
    PER_NameLast
    ...
CAT_CAT
    CAT_ID
    CAT_Name
    CAT_Breed
    ...
DOG_DOG
    DOG_ID
    DOG_Name
    DOG_Breed
    ...
PERCD_PERSON_CAT_DOG (for the join data)
    PERCD_ID
    PERCD_PER_ID
    PERCD_CAT_ID
    PERCD_DOG_ID

That said, as weird as this felt initially ... It grew on me. The reasons behind it made sense (after you wrapped your brain around it), as the prefixes were there to be reminders of "recommended" (and enforced!) table aliases when building joins. The prefixing made the majority of join queries easier to write, as it was very rare that you'd have to explicitly reference a table before the field.

Heck, after a while, all of us on the team (6 people on our project) were able to begin referring to tables in conversation by nothing more than the prefix. An acquired taste, to be sure ... But one that grew on me. So much so that I still use it, when I have that freedom.


Writing methods comments with pointless information for almost all methods.

Not allowing multiple exit points from a method.

Hungarian notation for all variables, enums, structures and even classes, e.g. iMyInt, tagMyStructure, eMyEnum and CMyClass.


No Hungarian whatsoever.

OK, you're thinking this is bad why? Well, because they considered this to be Hungarian:

int foo;
int *pFoo;
int **hFoo;

Now, any old-school Mac programmer will remember dealing with Handles and Ptrs. The above is the easiest way to tell them apart - Apple sample code is full of it, and Apple was hardly a hotbed of Hungarianism. And so when I had to write some old-school Mac code, naturally I did that, and got it shot down for being Hungarian.

But nobody could propose an alternate naming scheme that preserved the clarity of three variables referring to the same data in different ways, so I checked it in as-is.


At the place I'm currently working, the official coding standard stipulates a maximum line length of eighty characters. The rationale was to enable hard-copies of the code to be formatted. Needless to say, this led to very odd code layout. I've worked to eliminate this standard, mainly through the argument of 'when was the last time you made a hard-copy of code?' Readability now versus the chance of making a hard-copy on an eighty-column DMP?

Skizz


Not being able to use Reflection as the manager claimed it involved too much 'magic'.


The strangest one I saw was database table naming where the tables were prefaced with a TLA for the functional area, e.g. ACC for accounting, then a 3-digit number (to override the default sort), and then the table name.

Plus this was extended into the column names as well.

ACC100_AccountCode

It was a nightmare to read a query; they were just unreadable.


Once I had to build a little DLL for a team other than my own, and when it was done I had to redo the job because I shouldn't have used "else" in the code. When I asked why, I was instructed not to ask why; the leader of the other team just "didn't get the else stuff".


Our Oracle DBAs are insisting that we prepend the schema name onto table names, i.e. if your schema is hr_admin, your staff table would be hr_admin_staff, meaning the full name of the table in a cross-schema query would be hr_admin.hr_admin_staff.


If I remember correctly, the Delphi IDE did a default indent of two spaces. Most of the legacy code for the company had three spaces and was written by the VP of IT and the CEO. One day, all the programmers were talking about what we should do to make our lives easier, and a contractor who knew Delphi pretty well said, "Hey, the IDE defaults to two spaces; does anyone have a problem with us doing this going forward for new code?" All of us looked at each other, pretty much thought it was a no-brainer, and said that we agreed.

Two days later the VP and CEO found out we were going to make such a dangerous change that could "cause problems" and instructed us that we would be using three-space indents for everything until the two of them could accurately evaluate the impact of such a change. Now I am all for following standards, but these are the same people who thought OO programming was creating an object with one function that had all of the logic necessary to perform an action, and that source control was moving the code files to a different directory.


I ran into two rules that I really hated on a C job a few years ago:

  1. "One module per file," where "module" was defined as a C function.

  2. Function-local variables allowed only at the top of the function, so this sort of thing was illegal:

if (test)
{
   int i;
   ...
}
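
Under that rule, the block above had to be rewritten with the declaration hoisted to the top of the enclosing function, roughly like this (a sketch with invented names; the language itself is perfectly happy with block-local declarations):

void process(int test)
{
    int i;    /* forced up here, far from its only use */

    /* ...dozens of unrelated lines... */

    if (test)
    {
        for (i = 0; i < 10; i++)
        {
            /* ... */
        }
    }
}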

No ternary operator allowed where I currently work:

int value = (a < b) ? a : b;

... because not everyone "gets it". If you told me, "Don't use it because we've had to rewrite them when the structures get too complicated" (nested ternary operators, anyone?), then I'd understand. But when you tell me that some developers don't understand them... um... Sure.
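
For the record, here is the if/else that the rule forces in place of the one-liner above; note that value can no longer be initialized at its declaration (or be const):

int value;
if (a < b)
{
    value = a;
}
else
{
    value = b;
}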


The strangest was that type-qualified variable naming had to be used in Java, and the types were those of the columns from the database. So a java.sql.ResultSet had to be called tblClient, etc.


Maybe not the most outlandish one you'll get, but I really really hate when I have to preface database table names with 'tbl'


The first programming job I had was with a Microsoft QuickBASIC 4.5 shop. The lead developer had been working in BASIC just about forever, so most of the advanced (!) features of QuickBASIC were off-limits because they were new and he didn't understand them. So:

  • No Sub/End Sub procedures. Everything was done with GOSUB
  • We were allowed to not number lines that weren't the target of a GOTO or GOSUB. But GOTO targets had to be a numeric label, not a name.
  • Targets of GOSUB were allowed to be named, but the name had to be prefixed by 'S' and a four-digit number, and all subroutines had to appear in the source file sorted by that number. So a typical routine might be S1135InitializePrinter. You'd have to go and find the right routine to get the number; there were enough of them that you couldn't hope to remember them all.
  • No block IF/END IF. All IFs had to have either a single GOTO or GOSUB as the conditional statement.

That was a really fun job. No, seriously.


We have a "no code past the 80th character column" rule that is controversial in our C++ development team. Liked, and enforced in code review, by some; despised by others.

Also, we have a very controversial C++ throw(), throw(...) specification standard. Religiously used by some and demonized by others. Both camps cite discussions and experts to support their respective positions.
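
For anyone who hasn't run into them, the specifications being fought over look roughly like this (a sketch; throw(...) is a Microsoft extension, and dynamic exception specifications were deprecated in C++11 and later removed):

#include <new>   // std::bad_alloc

void Load(const char* path) throw();                   // promises to throw nothing
void Parse(const char* text) throw(std::bad_alloc);    // may throw only bad_alloc
void DoAnything(int mode) throw(...);                  // MSVC extension: may throw anything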


My old boss insisted that we use constants instead of enums but never gave a reason and in all the scenarios these were used an enum made more sense.

The better one though was insisting that all table names be singular and then making the classes in code singular as well. But not only did they represent the object, such as a user or group, they also represented the table and contained all of the CRUD for that table and numerous other actions. But wait, there’s more! They also had to contain a publicly visible name/value collection so that way you could get the properties with an indexer, by column name, just in case you added a new column but didn't want to add in a new property. There were a bunch of other "must do's" that not only didn't make sense, but put a big performance hit on the code as well. I could try to point them all out but the code speaks for itself and sadly this is almost an exact copy of the User class I just pulled out of an old archive folder:

public class Record
{
    private string tablename;
    private Database database;

    public NameValueCollection Fields;

    public Record(string TableName) : this(TableName, null) { }
    public Record(string TableName, Database db)
    {
        tablename = TableName;
        database = db;
    }

    public string TableName
    {
        get { return tablename; }
    }

    public ulong ID
    {
        get { return GetULong("ID"); }
        set { Fields["ID"] = value.ToString(); }

    }

    public virtual ulong GetULong(string field)
    {
        try { return ulong.Parse(Fields[field]); }
        catch(Exception) { return 0; }
    }

    public virtual bool Change()
    {
        InitializeDB(); // opens the connection
        // loop over the Fields object and build an update query
        DisposeDB(); // closes the connection
        // return the status
    }

    public virtual bool Create()
    {
        // works almost just like the Change method
    }

    public virtual bool Read()
    {
        InitializeDB(); // opens the connection
        // use the value of the ID property to build a select query
        // populate the Fields collection with the columns/values if the read was successful
        DisposeDB(); // closes the connection
        // return the status    
    }
}

public class User : Record
{
    public User() : base("User") { }
    public User(Database db) : base("User", db) { }

    public string Username
    {
        get { return Fields["Username"]; }
        set
        {
            Fields["Username"] = value.ToString(); // yes, there really is a redundant ToString call
        }
    }
}

Sorry if this double-posts; the first time around I might not have been human, or maybe the site just has a limit to how bad code can be to be posted.


At a former job:

  • "Normal" tables begin with T_
  • "System" tables (usually lookups) begin with TS_ (except when they don't because somebody didn't feel like it that day)
  • Cross-reference tables begin with TSX_
  • All field names begin with F_

Yes, that's right. All of the fields, in every single table. So that we can tell it's a field.


One that no one has mentioned is being forced to write unit tests for classes that are brainless getters and setters.


We have to put a comment above every SQL statement. So, you may have an SQL statement such as

Select USER_ID FROM USERS WHERE NAME = :NAME;

And you still have to have a comment above it that would say:

Select USER_ID from the USERS table, where name equals the name entered.

Now, when the actual comment is longer than the code, and the code is simple enough for a second grader to read, I really don't see the point of commenting... But, alas, I have had to go back and add comments to statements just like this.

This has been on a mainframe, coding in cobol. Team size is usually about 4 or 5, but this rule has bitten everyone here from time to time.


In C++, we had to write explicitly everything that the compiler would otherwise write for us (default constructor, destructor, copy constructor, copy assignment operator) for every class. It looks like whoever wrote the standards was not very confident in the language.
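
So even a class with nothing to manage had to spell all of it out. A minimal sketch of the boilerplate every class carried under that standard:

class Widget
{
public:
    Widget() : id_(0) {}                              // default constructor
    ~Widget() {}                                      // destructor
    Widget(const Widget& other) : id_(other.id_) {}   // copy constructor
    Widget& operator=(const Widget& other)            // copy assignment operator
    {
        id_ = other.id_;
        return *this;
    }

private:
    int id_;
};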


This isn't a coding standard issue, but is surely a tale of restrictive thinking. We had completed a short 4-week project in no less than 7 weeks. The schedule was loosely based on guesstimating a feature list. The development process consisted of coding furiously. During the postmortem I suggested using milestones and breaking feature requests into tasks. Incredibly, my director dismissed my ideas, saying that because it was such a short project, we didn't need to use milestones or tasks, and asked for other suggestions. The room fell silent.

Language: Java, C++, HTML. Team size: two teams, totaling 10 engineers. Ill effects it caused me and my team: I felt like I was caught in a Dilbert cartoon.


The worst coding standard I've ever had to live with was insane indentation.

The code had originally been written on a mainframe using 60x80 character green-screen terminals (this was quite a long time ago). The default tab size on these things was 8 characters, but the programmers at the time decided that was too big - the screen itself only showed 80 characters across, so an 8-character tab wasted a lot of space.

So they decided to set the indent size for their code to 4 characters.

All fair enough, you say. Except that they didn't do it by changing the tab size. They did it by making the first indentation to be 4 spaces, the second one to be a single tab character, and so on alternating between adding 4 spaces and a tab character.

While they stuck to the green screen terminals, this was fine. Weird, but fine.

The real chaos began when the development team got their shiny new Windows PCs.

The PC editor they chose had its tab size set to 4 characters, and so when the code was loaded, the indentation was simply all over the place.

We couldn't fix the indentation because some devs were still using the green screens, so for the year or so that it took to get the entire team transitioned to PCs, we had an absolute nightmare trying to work with code that was virtually unreadable in either one environment or the other (or more frequently, both).


I once worked under the tyranny of the Mighty VB King.

The VB King was the pure master of MS Excel and VBA, as well as databases (hence his nickname: he played with Excel while the developers worked with compilers, and challenging him on databases could have detrimental effects on your career...).

Of course, his immense skills gave him a unique vision of development problems and project management solutions: while not exactly coding standards in the strictest sense, the VB King regularly had new ideas about "coding standards" and "best practices" that he tried (and oftentimes succeeded) to impose on us. For example:

  • All C/C++ arrays shall start at index 1, instead of 0. Indeed, the use of 0 as first index of an array is obsolete, and has been superseded by Visual Basic 6's insightful array index management.

  • All functions shall return an error code: There are no exceptions in VB6, so why would we need them at all? (i.e. in C++)

  • Since "All functions shall return an error code" is not practical for functions returning meaningful types, all functions shall have an error code as first [in/out] parameter.

  • All our code will check the error codes (this led to the worst case of VBScript if-indentation I ever saw in my career... Of course, as the "else" clauses were never handled, no error was actually found until too late).

  • Since we're working with C++/COM, starting this very day, we will code all our DOM utility functions in Visual Basic.

  • ASP 115 errors are evil. For this reason, we will use On Error Resume Next in our VBScript/ASP code to avoid them.

  • XSL-T is an object oriented language. Use inheritance to resolve your problems (dumb surprise almost broke my jaw open this one day).

  • Exceptions are not used, and thus should be removed. For this reason, we will uncheck the checkbox asking for destructor call in case of exception unwinding (it took days for an expert to find the cause of all those memory leaks, and he almost went berserk when he found out they had willingly ignored (and hidden) his technical note about checking the option again, sent handfuls of weeks before).

  • catch all exceptions in the COM interface of our COM modules, and dispose them silently (this way, instead of crashing, a module would only appear to be faster... Shiny!... As we used the über error handling described above, it even took us some time to understand what was really happening... You can't have both speed and correct results, can you?).

  • Starting today, our code base will split into four branches. We will manage their synchronization and integrate all bug corrections/evolutions by hand.

All but the C/C++ arrays, VB DOM utility functions and XSL-T as OOP language were implemented despite our protests. Of course, over the time, some were discovered, ahem, broken, and abandoned altogether.

Of course, the VB King credibility never suffered for that: Among the higher management, he remained a "top gun" technical expert...

This produced some amusing side effects, as you can see by following the link What is the best comment in source code you have ever encountered?


I completely disagree with this one, but I was forced to follow it:

"All HTML LINKS will ALWAYS be underlined."

A while back I explained why I disagree on my blog.

Note: Even Stackoverflow ONLY underlines links when you move the mouse over them.


I once worked on a VB.NET project where every method body was wrapped in the following Try...Catch block:

Public Sub MyMethod()
    Try
        ' Whatever
    Catch Ex As Exception
        Throw New Exception("MyClass::MyMethod::" + Ex.ToString())
    End Try
End Sub

Those who do not understand Exception.StackTrace are doomed to reinvent it, badly.


Although this wasn't at a job, we had a massive project for a class in college. One of the requirements was commenting every line of code in our application -- regardless of what it did... and each line had to be specific e.g.

int x=0; //declare variable x and assign it to 0

We weren't allowed to do this:

int x, y, z = 0; //declare and assign to 0

As it wasn't detailed enough. And that's not even following the naming conventions forced upon us.

Needless to say we spent a few hours going back through the code...


I've been getting worked up over naming table columns after mysql keywords. It requires stupid column name escaping in every single query you write.

SELECT this, that, `key` FROM sometable WHERE such AND suchmore;

Just horrible.


Adding an 80 character comment at the end of each method so it is easy to find the end of the method. Like this:

void doSomething()
{
}
//----------------------------------------------------------------------------

The rationale being that:

  • some users don't use IDE's that have code folding (Ok I will give them that).
  • a space between methods is not clear since people may not follow the other coding standards about indenting and brace placement, hence it would be hard to find the end of a function. (Not relevant; if you need to add this because people don't follow your coding standard, then why should they follow this one?)

It was a coding standard I did not follow myself (I got in trouble for other things, but never that). We had three 19" monitors, so we could have two editors open full screen and still have access to the desktop. Everyone else did not use comments, but used meaningful names. Extremely long meaningful names. The longest I remember was in the 80-character range. The average was around 40~50.

Guess what, they didn't accurately describe the whole thing.


at my previous job, which I gladly quit 3 months ago:

database:

  • Table names had to be uppercase.
  • Table names had to be prefixed TBL_
  • Fields had to be prefixed: DS_ for varchar (which made no sense), NU_ for numbers, CD_ for "bit fields", DT_ for dates
  • database fields had also to be uppercase [CD_ENABLED]
  • same with sp names [SP_INFINITY_USER_GROUPS_QRY] and database names [INFINITY]
  • did I mention sp names were actually like that? The SP_ prefix, then the database name (SP_INFINITY_), then the table name (SP_INFINITY_USER_GROUPS), then what the sp was actually expected to do (QRY, UPD, DEL, INS). Jesus, don't even get me started on queries that weren't just CRUD queries.
  • all text fields had to be varchar(MAX), unequivocally.
  • numbers were either int or double, even when you could have used another type.
  • "boolean" fields (bit) were int, no reason.
  • stored procedures had to be prefixed sp_productname_

asp.net / c# / javascript

  • EVERY single function had to be wrapped in try{}catch{}, so the applications wouldn't "explode" (at least that was the official reason), even when this produced things not working and not having a clue why.
  • parameters must be prefixed with p, e.g pCount, pPage
  • scope variables had to be prefixed with w (as in "working", what the hell does that even mean?)
  • statics with g, etc.
  • everything post framework 1.1 was off-limits, like you had any real uses for LINQ and generics anyway. (I made it a point to push them into letting me use jQuery, though; I succeeded at that, at least.)

You must use only five-letter table names, and the last two characters are reserved for IO.


Back in the 80's/90's, I worked for an aircraft simulator company that used FORTRAN. Our FORTRAN compiler had a limit of 8 characters for variable names. The company's coding standards reserved the first three of them for Hungarian-notation style info. So we had to try and create meaningful variable names with just 5 characters!


The first language I used professionally was 4D. It supported interprocess variables prefixed by a <>, process variables with no prefixes and local variables which started with a $. All those prefixes (or lack thereof) are used by the compiler/interpreter to determine the variable's scope.

The actual strange coding standard was some sort of hungarian notation. The catch was that instead of naming variables based on their types, they had to be prefixed according to their scope.

Variables, whose scope was already determined by their prefix, had to be prefixed with redundant information!

I don't dare ask the guy responsible for the standards why it had to be this way...


I've had a lot of stupid rules, but not a lot that I considered downright strange.

The silliest was on a NASA job I worked back in the early 90's. This was a huge job, with well over 100 developers on it. The experienced developers who wrote the coding standards decided that every source file should begin with a four-letter acronym, and the first letter had to stand for the group that was responsible for the file. This was probably a great idea for the old FORTRAN 77 projects they were used to.

However, this was an Ada project, with a nice hierarchical library structure, so it made no sense at all. Every directory was full of files starting with the same letter, followed by 3 more nonsense letters, an underscore, and then the part of the file name that mattered. All the Ada packages had to start with this same five-character wart. Ada "use" clauses were not allowed either (arguably a good thing under normal circumstances), so that meant any reference to any identifier that wasn't local to that source file also had to include this useless wart. There probably should have been an insurrection over this, but the entire project was staffed by junior programmers and fresh-from-college new hires (myself being the latter).

A typical assignment statement (already verbose in Ada) would end up looking something like this:

NABC_The_Package_Name.X := NABC_The_Package_Name.X + 
  CXYZ_Some_Other_Package_Name.Delta_X;

Fortunately they were at least enlightened enough to allow us more than 80 columns! Still, the facility wart was hated enough that it became boilerplate code at the top of everyone's source files to use Ada "renames" to get rid of the wart. There'd be one rename for each imported ("withed") package. Like this:

package Package_Name renames NABC_Package_Name;
package Some_Other_Package_Name renames CXYZ_Some_Other_Package_Name;
--// Repeated in this vein for an average of 10 lines or so

What the more creative among us took to doing was trying to use the wart to make an actually sensible (or silly) package name. (I know what you are thinking, but expletives were not allowed, and shame on you! That's disgusting). For example, I was in the Common code group, and I needed to make a package to interface with the Workstation group. After a brainstorming session with the Workstation guy, we decided to name our packages so that someone needing both would have to write:

with CANT_Interface_Package;
with WONT_Interface_Package;

"The guys who wrote the compiler are probably a lot smarter than you so don't try something clever" is what one guide line document said (not quite literally).


The team size was about a dozen. For C# methods we had to put a huge XML-formatted comment before every function. I don't remember the format exactly, but it involved XML tags nested about three to five levels deep. Here's a sketch from memory of the comment:

/// <comment>
/// </comment>
/// <table>
///    <thead>
///       <tcolumns>
///          <column>Date</column>
///          <column>Modified By</column>
///          <column>Comment</column>
///       </tcolumns>
///    </thead>
///    <rows>
///       <row>
///          <column>10/10/2006</column>
///          <column>Fred</column>
///          <column>Created function</column>
///       </row>
///    </rows>
/// <parameters>

I've got to stop there....

The downsides were many.

  • Files were made up mostly of comments.
  • We were not using our version control system for tracking changes to files.
  • Writing many small functions hurt readability.
  • Lots of scrolling.
  • Some people did not update the comments.

I used a code snippet (Emacs YAS) to add this code to my methods.


Forbidden:

while (true) {

Allowed:

for (;;) {

In 1987 or so, I took a job with a company that hired me because I was one of a small handful of people who knew how to use Revelation. Revelation, if you've never heard of it, was essentially a PC-based implementation of the Pick operating system - which, if you've never heard of it, got its name from its inventor, the fabulously-named Dick Pick. Much can be said about the Pick OS, most of it good. A number of supermini vendors (Prime and MIPS, at least) used Pick, or their own custom implementations of it.

This company was a Prime shop, and for their in-house systems they used Information. (No, that was really its name: it was Prime's implementation of Pick.) They had a contract with the state to build a PC-based system, and had put about a year into their Revelation project before the guy doing all the work, who was also their MIS director, decided he couldn't do both jobs anymore and hired me.

At any rate, he'd established a number of coding standards for their Prime-based software, many of which derived from two basic conditions: 1) the use of 80-column dumb terminals, and 2) the fact that since Prime didn't have a visual editor, he'd written his own. Because of the magic portability of Pick code, he'd brought his editor down into Revelation, and had built the entire project on the PC using it.

Revelation, of course, being PC-based, had a perfectly good full-screen editor, and didn't object when you went past column 80. However, for the first several months I was there, he insisted that I use his editor and his standards.

So, the first standard was that every line of code had to be commented. Every line. No exceptions. His rationale for that was that even if your comment said exactly what you had just written in the code, having to comment it meant you at least thought about the line twice. Also, as he cheerfully pointed out, he'd added a command to the editor that formatted each line of code so that you could put an end-of-line comment.

Oh, yes. When you commented every line of code, it was with end-of-line comments. In short, the first 64 characters of each line were for code, then there was a semicolon, and then you had 15 characters to describe what your 64 characters did. In short, we were using an assembly language convention to format our Pick/Basic code. This led to things that looked like this:

EVENT.LIST[DATE.INDEX][-1] = _         ;ADD THE MOST RECENT EVENT
   EVENTS[LEN(EVENTS)]                 ;TO THE END OF EVENT LIST

(Actually, after 20 years I have finally forgotten R/Basic's line-continuation syntax, so it may have looked different. But you get the idea.)

Additionally, whenever you had to insert multiline comments, the rule was that you use a flower box:

************************************************************************
**  IN CASE YOU NEVER HEARD OF ONE, OR COULDN'T GUESS FROM ITS NAME,  **
**  THIS IS A FLOWER BOX.                                             **
************************************************************************

Yes, those closing asterisks on each line were required. After all, if you used his editor, it was just a simple editor command to insert a flower box.

Getting him to relent and let me use Revelation's built-in editor was quite a battle. At first he was insistent, simply because those were the rules. When I objected that a) I already knew the Revelation editor b) it was substantially more functional than his editor, c) other Revelation developers would have the same perspective, he retorted that if I didn't train on his editor I wouldn't ever be able to work on the Prime codebase, which, as we both knew, was not going to happen as long as hell remained unfrozen over. Finally he gave in.

But the coding standards were the last to go. The flower-box comments in particular were a stupid waste of time, and he fought me tooth and nail on them, saying that if I'd just use the right editor maintaining them would be perfectly easy. (The whole thing got pretty passive-aggressive.) Finally I quietly gave in, and from then on all of the code I brought to code reviews had his precious flower-box comments.

One day, several months into the job, when I'd pretty much proven myself more than competent (especially in comparison with the remarkable parade of other coders that passed through that office while I worked there), he was looking over my shoulder as I worked, and he noticed I wasn't using flower-box comments. Oh, I said, I wrote a source-code formatter that converts my comments into your style when I print them out. It's easier than maintaining them in the editor. He opened his mouth, thought for a moment, closed it, went away, and we never talked about coding standards again. Both of our jobs got easier after that.


Our old c# coding standards required that we use huge, ugly comment blocks. You know in Code Complete where Steve McConnell gives a prime example of an ugly comment macro? That. Almost an exact match.

The worst thing about this was that c# is a language that already has good (and relatively unobtrusive) comment support.

You'd get something like this:

/// <summary>
/// Add an item to the collection
/// </summary>
/// <parameter name="item">The item to add</parameter>
/// <returns>Whether the addition succeeded</returns>
public bool Add(int item) { ... }

and it'd turn into this:

// ########################################################## //
/// <summary>
///     Add an item to the collection
/// </summary>
///     IN:  <parameter name="item">The item to add</parameter>
///     OUT: <returns>Whether the addition succeeded</returns>
// ########################################################## //

Note that StackOverflow's syntax highlighting does not do it justice, as with the default VS text scheme, the # symbol is bright green, resulting in an overpowering violation of your retinas.

I can only assume the authors were really, really fond of it from previous endeavours with C/C++. The problem was that, even if you just had a couple of auto properties, it'd take up about 50% of your screen space and add significant noise. The extra // lines also messed up R#'s refactoring support.

After we ditched the comment macro, we ended up spanking the whole codebase with a script that took us back to visual studio's default c# comment style.


Using generic numbered identifier names

At my current work we have two rules which are really mean:

Rule 1: Every time we create a new field in a database table, we have to add additional reserve fields for future use. These reserve fields are numbered (because no one knows which data they will hold some day). The next time we need a new field, we first look for an unused reserve field.

So we end up with with customer.reserve_field_14 containing the e-mail address of the customer.

One day our boss thought about introducing reserve tables, but fortunately we could convince him not to do it.

Rule 2: One of our products is written in VB6, and VB6 has a limit on the total count of different identifier names. Since the code is very large, we constantly run into this limit. As a "solution", all local variable names are numbered:

  • Lvarlong1
  • Lvarlong2
  • Lvarstr1
  • ...

Although that effectively circumvents the identifier limit, these two rules combined lead to beautiful code like this:

...

If Lvarbool1 Then
  Lvarbool2 = True
End If

If Lvarbool2 Or Lvarstr1 <> Lvarstr5 Then
  db.Execute("DELETE FROM customer WHERE " _ 
      & "reserve_field_12 = '" & Lvarstr1 & "'")
End If

...

You can imagine how hard it is to fix old or someone else's code...

Latest update: Now we are also using "reserve procedures" for private members:

Private Sub LSub1(Lvarlong1 As Long, Lvarstr1 As String)
  If Lvarlong1 >= 0 Then 
    Lvarbool1 = LFunc1(Lvarstr1)
  Else
    Lvarbool1 = LFunc6()
  End If
  If Lvarbool1 Then
    LSub4 Lvarstr1
  End If
End Sub

EDIT: It seems that this code pattern is becoming more and more popular. See this The Daily WTF post to learn more: Astigmatism :)


Several WTF's in one VB6 shop (I'm not proud, I was hungry and needed to eat) back in 2002 - 2004.

The most annoying, IMHO, was setting all object references to Nothing at the end of the sub/function. This was to "help" the compiler reference count. It didn't matter how many tests I performed for the TA to prove it wasn't necessary; oh no, it still had to be done, even though he had absolutely no evidence to back him up whatsoever. Eventually I gave up, and about a year later found an article explaining why it was pants. I brought this to the TA thinking "Got the fecker!". He goes, "Yeah, I've known about that for years, but if you start changing the standard the sheep" (meaning other developers, the people he worked with every day) "will screw it up". Gob sh1te.

Others in the same shop.

  • Never delete code, always comment it out (even though we were using source control).
  • Prefixes on table names that were meaningless when I got there, but had to be enforced on new tables.
  • Prefixing all objects with o_ (lo_ for procedure-level references, mo_ for module, go_ for global). Absolutely pointless in a project where every other variable was an object reference.

Mostly I was writing C++ there (as the only C++ developer, I made my own standards and enforced them with rigor!) with occasional VB, otherwise I wouldn't have lasted.


Hungarian notation in general.


Applying s_ to variables and methods which were deemed "safety critical" for software that was part of a control system. Couple this with the other rule about putting m_ on the front of member variables and you'd get something ridiculous like "s_m_blah()", which is darn annoying to write and not very readable in my opinion. In the end some 'safety expert' was supposed to gain insight by looking at the code and determining something from it by using those "s_" - in practice, they didn't know c++ too well so they couldn't do much other than make reports on the number of identifiers that we'd marked as 'safety critical'. Utter nonsense...


I worked in a VB .NET shop three years ago, where the "technical lead" decreed that all methods accepting a reference type parameter (i.e., an object) must use ByRef instead of ByVal. I found this especially odd because they'd asked me the ByVal/ByRef-what's-the-difference question in my interview, and I explained how it worked for value types and for reference types.

His explanation for the practice: "Some of the newer, less-experienced devs will get confused otherwise."

At the time, I was the most recently hired, and it was my first permanent .NET job. And I wasn't confused by it.


The creator of the file (who doesn't have to put any code in it) has to put their name in the file. So if you create stubs or placeholders, you "own" them forever.

The guy who actually writes the code doesn't add his name; we had source control so that we'd always know who to blame.


I had to spell and grammar check my comments. They had to be complete sentences, properly capitalized and finished with a period.


Doing all database queries via stored procedures in SQL Server 2000. From complex multi-table queries to simple ones like:

select id, name from people

The arguments in favor of procedures were:

  • Performance
  • Security
  • Maintainability

I know that the procedure topic is quite controversial, so feel free to score my answer negatively ;)


All file names must be in lower case...


What drives me nuts is people suffixing the ID field of a table with the name of the table. What the hell is wrong with just ID? You're going to have to alias it anyway... for the love of all that is sacred!

Imagine what your SQL statements look like when you've got id fields called IDSEWEBLASTCUSTOMERACTION and IDSEEVENTLOGGER.


Almost any kind of hungarian notation.

The problem with hungarian notation is that it is very often misunderstood. The original idea was to prefix the variable so that the meaning was clear. For example:

int appleCount = 0; // Number of apples.
int pearCount = 0; // Number of pears.

But most people use it to determine the type.

int iAppleCount = 0; // Number of apples.
int iPearCount = 0;  // Number of pears.

This is confusing, because although both numbers are integers, everybody knows, you can't compare apples with pears.


Only one variable can be declared per logical line. [Rationale: Multiple declarations per line results in an inaccurate line-of-code count.]
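
The answer doesn't name a language, but in C-family terms the rule boils down to this (sketch):

int x, y, z = 0;   // forbidden: three variables declared on one logical line

int a = 0;         // required: one declaration per line, so the
int b = 0;         // line-of-code count stays "accurate"
int c = 0;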


We had to sort all the functions in classes alphabetically, to make them "easier to find". Never mind that the IDE had a drop-down. That was too many clicks.

(same tech lead wrote an app to remove all comments from our source code).


The worst I've experienced was to do with code inspections. For some reason even though we had and used the diff tool of our vcs to see what had changed, when you wanted your code inspected you had to surround your changes in a file/function with some comment blocks like so:

/*********...80charswide...***
 * START INSPECT
 */

 some changed code...

 /*
  * END INSPECT
  *********...80charswide...****/

After the inspection you'd have to go back and remove all those comment blocks before committing. ugh.


Perhaps one of the more frustrating situations I've encountered was where people insisted on prefixing Stored Procedures with the prefix "sp_".

If you don't know why this is a bad thing to do, check out this blog entry here!

In a nutshell, if SQL Server is looking for a Stored Procedure with an sp_ prefix, it will check the master database first (which it won't find unless the SP is actually in the master database). Assuming it isn't in the master DB, SQL Server assumes the SP isn't in the cache and therefore recompiles it.

It may sound like a small thing, but it adds up in high volume or busy database server environments!


The project I work on treats hard coding as a strict NO... So we were forced to #define as below:

#define ONE 1


In Delphi we had to change from

if something then
begin
  ...
end
else
begin
 ...
end;

to

if something then begin
  ...
end else begin
 ...
end;

in a project with 1.5 million lines of code. Imagine how easy this was on source control, diff, and merge! It also led to people forgetting a begin and not noticing it right away when the compiler announced a superfluous end.


I worked at a place that had a merger between 2 companies. The 'dominant' one had a major server written in K&R C (i.e. pre-ANSI). They forced the Java teams (from both offices -- probably 20 devs total) to use this format, which gleefully ignored the 2 pillars of the "brace debate" and goes straight to crazy:

if ( x == y ) 
    {
    System.out.println("this is painful");
    x = 0;
    y++;
    }

Using _ or m_ in front of member variables when you can simply use the keyword this when you need to access them...
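
The answer doesn't say which language, but the complaint translates to something like this C++ sketch:

class Counter
{
    int m_count;                 // prefix mandated by the standard...
public:
    void Add(int count) { m_count += count; }
};

class Counter2
{
    int count;
public:
    // ...even though the language already disambiguates members for you:
    void Add(int count) { this->count += count; }
};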


Giving numbers to our tables, like tbl47_[some name]


Prefix tables with dbo_

Yes, as in dbo.dbo_tablename.


My coding standards gripes are pretty tame compared to some of the heinous stuff I've seen here, but here goes:

I was on a project where some of the developers insisted on the most peculiar form of indenting I've ever seen:

if (condition)
   {
      x++;
      printf("Hello condition!\n");
   }
else
   {
      y++;
   }

We were developing for an embedded environment with a really rotten debugger. In fact, printf(), hexdump() and the mapfile were the preferred method of debugging. This of course meant using static was forbidden and all global variables and functions had to be of the form modulename_variablename.

Checking in code with warnings was forbidden (not such a bad thing), but the compiler would warn about any conditional that was constant. Therefore, the old macro/statement trick of do { something(); } while(0) was forbidden.
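
For context, the do { ... } while(0) idiom is the usual way to make a multi-statement macro behave like a single statement; here is a small sketch (the macro and function names are invented) of what that warning rule effectively outlawed:

/* Hypothetical macro using the banned idiom: */
#define RESET_AND_LOG(x)  do { log_value(x); (x) = 0; } while (0)

/* The wrapper exists so the macro expands safely inside an unbraced if/else:
 *
 *     if (error)
 *         RESET_AND_LOG(count);
 *     else
 *         retry();
 *
 * but "while (0)" is a constant conditional, which the compiler warned about,
 * and since warnings were forbidden, the idiom was forbidden too. */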

Lastly, leaving a trailing comma on an enumerator list or initializer was considered lazy, and thus forbidden:

enum debuglevel
   {
      NONE,
      FATAL,
      WARNING,
      VERBOSE,   // Naughty, naughty!
   };

As I've said, rather tame. But as a follower of "The Ten Commandments for C Programmers", I found the unconventional bracing style absolutely maddening.


Half of the team favored four-space indentation; the other half favored two-space indentation.

As you can guess, the coding standard mandated three, so as to "offend all equally" (a direct quote).


a friend of mine - we'll call him CodeMonkey - got his first job out of college [many years ago] doing in-house development in COBOL. His first program was rejected as 'not complying with our standards' because it used... [shudder!] nested IF statements

the coding standards banned the use of nested IF statements

now, CodeMonkey was not shy and was certain of his abilities, so he persisted in asking everyone up the chain and down the aisle why this rule existed. Most claimed they did not know, some made up stuff about 'readability', and finally one person remembered the original reason: the first version of the COBOL compiler they used had a bug and didn't handle nested IF statements correctly.

This compiler bug, of course, had been fixed for at least a decade, but no one had challenged the standards. [baaa!]

CodeMonkey was successful in getting the standards changed - eventually!


In Java, I am currently discouraged from using boolean functions directly as the predicate in a test:

    if( list.isEmpty() )...

must be rewritten

    if( list.isEmpty() == true )...

and

    if( !list.isEmpty() )...

must be rewritten

    if( list.isEmpty() == false )...

because "it is clearer like that".

To me, "list.isEmpty() == true" has 2 verbs, "is" and "equals", in one phrase without a connective. I can't make it feel right.


The worst is a nameless place I still earn money from; there are no standards. Every program is a new adventure.

Fortunately another contractor and I are slowly training the real employees and forcing some structure on the mess.


Having to put an m_ prefix on Java instance variables and a g_ prefix on Java static variables - the most un-Java idiot cruft I have ever had to deal with, perpetuated by C and C++ developers who didn't know how to use anything other than Notepad to develop Java with!

Except that nobody actually followed it, other than putting m_ on everything - even statics, even method names...


I once worked on a VB.NET project where every method body was wrapped in the following Try...Catch block:

Public Sub MyMethod()
    Try
        ' Whatever
    Catch Ex As Exception
        Throw New Exception("MyClass::MyMethod::" + Ex.ToString())
    End Try
End Sub

Those who do not understand Exception.StackTrace are doomed to reinvent it, badly.


I worked in a place where the coding standard was one giant WTF: strange Hungarian notation, prefixing globals with 'g' and members with 'm' (so there were gems like gsSomeVariable), adding 'ref string sError' to every single function, instead of throwing exceptions (which was a BIG nono!).

The killer, though, was prefixing the function parameters with I_ for input parameters, and O_ for output parameters.

I work now in a much better place :)


At a major UK bank I was brought in to act as a design authority on a new .NET system.

Their rules state that the database tables had to be a maximum of 8 characters long, with the project code (a 5 digit code) as the prefix.

They were enforcing old DB2 rules onto Windows projects. Sigh.


My weirdest one was at a contract a couple years ago. @ZombieSheep's weird one was part of it, but not the weirdest one in that company.

No, the weirdest one in that company was the database naming scheme. Every table was named in all caps, with underscores between the words. Every table had a prefix (generally 1 - 6 characters) which was usually an acronym or an abbreviation of the main table name. Every field of the table was prefixed with the same prefix as well. So, let's say you have a simple schema where people can own cats or dogs. It'd look like this:

PER_PERSON
    PER_ID
    PER_NameFirst
    PER_NameLast
    ...
CAT_CAT
    CAT_ID
    CAT_Name
    CAT_Breed
    ...
DOG_DOG
    DOG_ID
    DOG_Name
    DOG_Breed
    ...
PERCD_PERSON_CAT_DOG (for the join data)
    PERCD_ID
    PERCD_PER_ID
    PERCD_CAT_ID
    PERCD_DOG_ID

That said, as weird as this felt initially ... It grew on me. The reasons behind it made sense (after you wrapped your brain around it), as the prefixes were there to be reminders of "recommended" (and enforced!) table aliases when building joins. The prefixing made the majority of join queries easier to write, as it was very rare that you'd have to explicitly reference a table before the field.

Heck, after a while, all of us on the team (6 people on our project) were able to begin referring to tables in conversation by nothing more than the prefix. An acquired taste, to be sure ... But one that grew on me. So much so that I still use it, when I have that freedom.


I've had a lot of stupid rules, but not a lot that I considered downright strange.

The silliest was on a NASA job I worked back in the early 90's. This was a huge job, with well over 100 developers on it. The experienced developers who wrote the coding standards decided that every source file should begin with a four letter acronym, and the first letter had to stand for the group that was responsible for the file. This was probably a great idea for the old FORTRAN 77 projects they were used to.

However, this was an Ada project, with a nice hierarchical library structure, so it made no sense at all. Every directory was full of files starting with the same letter, followed by 3 more nonsense letters, an underscore, and then the part of the file name that mattered. All the Ada packages had to start with this same five-character wart. Ada "use" clauses were not allowed either (arguably a good thing under normal circumstances), so that meant any reference to any identifier that wasn't local to that source file also had to include this useless wart. There probably should have been an insurrection over this, but the entire project was staffed by junior programmers and fresh-from-college new hires (myself being the latter).

A typical assignment statement (already verbose in Ada) would end up looking something like this:

NABC_The_Package_Name.X := NABC_The_Package_Name.X + 
  CXYZ_Some_Other_Package_Name.Delta_X;

Fortunately they were at least enlightened enough to allow us more than 80 columns! Still, the facility wart was hated enough that it became boilerplate code at the top of everyone's source files to use Ada "renames" to get rid of the wart. There'd be one rename for each imported ("withed") package. Like this:

package Package_Name renames NABC_Package_Name;
package Some_Other_Package_Name renames CXYZ_Some_Other_Package_Name;
--// Repeated in this vein for an average of 10 lines or so

What the more creative among us took to doing was trying to use the wart to make an actually sensible (or silly) package name. (I know what you are thinking, but expletives were not allowed, and shame on you! That's disgusting). For example, I was in the Common code group, and I needed to make a package to interface with the Workstation group. After a brainstorming session with the Workstation guy, we decided to name our packages so that someone needing both would have to write:

with CANT_Interface_Package;
with WONT_Interface_Package;


Not being able to use Reflection as the manager claimed it involved too much 'magic'.


I worked in a VB .NET shop three years ago, where the "technical lead" decreed that all methods accepting a reference type parameter (i.e., an object) must use ByRef instead of ByVal. I found this especially odd because they'd asked me the ByVal/ByRef-what's-the-difference question in my interview, and I explained how it worked for value types and for reference types.

His explanation for the practice: "Some of the newer, less-experienced devs will get confused otherwise."

At the time, I was the most recently hired, and it was my first permanent .NET job. And I wasn't confused by it.


Back in the 80's/90's, I worked for an aircraft simulator company that used FORTRAN. Our FORTRAN compiler had a limit of 8 characters for variable names. The company's coding standards reserved the first three of them for Hungarian-notation style info. So we had to try and create meaningful variable names with just 5 characters!


The very strangest one I had, and one which took me quite some time to overthrow, was when the owner of our company demanded that our new product be IE only. If it could work on FireFox, that was OK, but it had to be IE only.

This might not sound too strange, except for one little flaw. All of the software was for a bespoke server software package, running on Linux, and all client boxes that our customer was buying were Linux. Short of trying to figure out how to get Wine (in those days, very unreliable) up and running on all of these boxes and seeing if we could get IE running and training their admins how to debug Wine problems, it simply wasn't possible to meet the owner's request. The problem was that he was doing the Web design and simply didn't know how to make Web sites compliant with FireFox.

It probably won't shock you to know that our company went bankrupt.


Hungarian notation in general.


Wow -- this brings back so many memories of one particular place that I worked: Arizona Department of Transportation.

There was a project manager there that didn't understand object-based programming (and didn't want to understand it). She was convinced that object-based programming was a fad, and refused to let anybody check-in code that used any kind of object based programming.

(Seriously -- she actually spent a lot of her day reviewing code that we had checked-in to Visual SourceSafe just to make sure we weren't breaking the rules).

Considering Visual Basic 4 had just released (this was about 12 years ago), and considering that the Windows forms application we were building in VB4 used objects to describe the forms, this made development ... complicated.

A buddy of mine actually tried to get around this problem by encapsulating his 'object code' inside dummy 'forms' and she eventually caught on that he was just (* gasp *) hiding his objects!

Needless to say, I only lasted about 3 months there.

Gosh, I disliked that woman's thinking.


No ternary operator allowed where I currently work:

int value = (a < b) ? a : b;

... because not everyone "gets it". If you told me, "Don't use it because we've had to rewrite them when the structures get too complicated" (nested ternary operators, anyone?), then I'd understand. But when you tell me that some developers don't understand them... um... Sure.


No Hungarian whatsoever.

OK, you're thinking this is bad why? Well, because they considered this to be Hungarian:

int foo;
int *pFoo;
int **hFoo;

Now, any old-school Mac programmer will remember dealing with Handles and Ptrs. The above is the easiest way to tell them apart - Apple sample code is full of it, and Apple was hardly a hotbed of Hungarianism. And so when I had to write some old-school Mac code, naturally I did that, and got it shot down for being Hungarian.

But nobody could propose an alternate naming scheme that preserved the clarity of three variables referring to the same data in different ways, so I checked it in as-is.


Several WTF's in one VB6 shop (I'm not proud, I was hungry and needed to eat) back in 2002 - 2004.

The most annoying, IMHO, was setting all object references to Nothing at the end of the sub/function. This was to "help" the compiler with reference counting. It didn't matter how many tests I performed for the TA to prove it wasn't necessary - oh no, it still had to be done, even though he had absolutely no evidence to back him up whatsoever. Eventually I gave up, and about a year later found an article explaining why it was pants. I brought this to the TA thinking "Got the fecker!". He goes "Yeah, I've known about that for years, but if you start changing the standard the sheep" - meaning the other developers, the people he worked with every day - "will screw it up". Gob sh1te.

Others in the same shop.

  • Never delete code, always comment it out (even though we were using source control).
  • Prefixes on table names that were meaningless when I got there, but had to be enforced on new tables.
  • Prefixing all objects with o_ (lo_ for procedure level references, mo_ for module, go_ for global). Absolutely pointless in a project where every other variable was an object reference.

Mostly I was writing C++ there (I was the only C++ developer, so I made my own standards and enforced them with rigor!) with occasional VB, otherwise I wouldn't have lasted.


Maybe not the most outlandish one you'll get, but I really really hate when I have to preface database table names with 'tbl'


Not being allowed to use pointers or GOTO! (In C, no less!) Thankfully this was merely a "software engineering" class, from which I was able to graduate and then enter the "real world".


The creator of the file (doesn't have to put any code in) has to put their name in the file. So if you create stubs or placeholders, you "own" them forever.

The guy who actually writes the code doesn't add his name; we had source control, so we'd always know who to blame.


The first programming job I had was with a Microsoft QuickBASIC 4.5 shop. The lead developer had been working in BASIC just about forever, so most of the advanced (!) features of QuickBASIC were off-limits because they were new and he didn't understand them. So:

  • No Sub/End Sub procedures. Everything was done with GOSUB
  • We were allowed to leave lines unnumbered if they weren't the target of a GOTO or GOSUB, but GOTO targets had to be numeric labels, not names.
  • Targets of GOSUB were allowed to be named, but the name had to be prefixed with 'S' and a four digit number, and all subroutines had to appear in the source file sorted by that number. So a typical routine might be S1135InitializePrinter. You'd have to go and find the right routine to get the number; there were enough of them that you couldn't hope to remember them all.
  • No block IF/END IF. All IFs had to have either a single GOTO or GOSUB as the conditional statement.

That was a really fun job. No, seriously.


To NEVER remove any code when making changes. We were told to comment all changes. Bear in mind we use source control. This policy didn't last long because developers were in an uproar about it and how it would make the code unreadable.


Using generic numbered identifier names

At my current work we have two rules which are really mean:

Rule 1: Every time we create a new field in a database table we have to add additional reserve fields for future use. These reserve fields are numbered (because no one knows which data they will hold some day). The next time we need a new field, we first look for an unused reserve field.

So we end up with customer.reserve_field_14 containing the e-mail address of the customer.

One day our boss thought about introducing reserve tables, but fortunately we managed to convince him not to.

Rule 2: One of our products is written in VB6, and VB6 has a limit on the total number of distinct identifier names. Since the code base is very large, we constantly run into this limit. As a "solution", all local variable names are numbered:

  • Lvarlong1
  • Lvarlong2
  • Lvarstr1
  • ...

Although that effectively circumvents the identifier limit, these two rules combined lead to beautiful code like this:

...

If Lvarbool1 Then
  Lvarbool2 = True
End If

If Lvarbool2 Or Lvarstr1 <> Lvarstr5 Then
  db.Execute("DELETE FROM customer WHERE " _ 
      & "reserve_field_12 = '" & Lvarstr1 & "'")
End If

...

You can imagine how hard it is to fix old or someone else's code...

Latest update: Now we are also using "reserve procedures" for private members:

Private Sub LSub1(Lvarlong1 As Long, Lvarstr1 As String)
  If Lvarlong1 >= 0 Then 
    Lvarbool1 = LFunc1(Lvarstr1)
  Else
    Lvarbool1 = LFunc6()
  End If
  If Lvarbool1 Then
    LSub4 Lvarstr1
  End If
End Sub

EDIT: It seems that this code pattern is becoming more and more popular. See this post on The Daily WTF to learn more: Astigmatism :)


I've been getting worked up over naming table columns after MySQL keywords. It requires stupid column-name escaping in every single query you write.

SELECT this, that, `key` FROM sometable WHERE such AND suchmore;

Just horrible.


Writing method comments with pointless information for almost all methods.

Not allowing multiple exit points from a method.

Hungarian notation for all variables, enums, structures and even classes, e.g. iMyInt, tagMyStructure, eMyEnum and CMyClass.


Applying s_ to variables and methods which were deemed "safety critical" for software that was part of a control system. Couple this with the other rule about putting m_ on the front of member variables and you'd get something ridiculous like "s_m_blah()", which is darn annoying to write and not very readable in my opinion. In the end some 'safety expert' was supposed to gain insight by looking at the code and determining something from it by using those "s_" - in practice, they didn't know c++ too well so they couldn't do much other than make reports on the number of identifiers that we'd marked as 'safety critical'. Utter nonsense...


On one of my first jobs the boss said that we should always use fully qualified type names in C# and forbade usings, since we should always know which type we're using when declaring a variable, parameter, etc.
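
For illustration only - a small C++ analog of the same idea, since the shop's rule was about C# using directives: every type spelled out in full, no using-declarations allowed.

#include <iostream>
#include <string>
#include <vector>

int main()
{
    // What the rule demands: fully qualified names everywhere.
    std::vector<std::string> names;
    names.push_back("example");
    std::cout << names.size() << std::endl;

    // What the rule forbids:
    // using std::vector;
    // using std::string;
    // vector<string> other;

    return 0;
}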


All file names must be in lower case...


A buddy of mine encountered this rule while working at a government job. The use of ++ (pre or post) was completely banned. The reason: Different compilers might interpret it differently.
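
My guess at the kernel of truth behind that justification (not the team's actual reasoning): expressions that read and modify the same variable without an intervening sequence point really are undefined, so compilers really do disagree on those - but a plain i++ statement is not one of them.

#include <stdio.h>

int main(void)
{
    int i = 0;

    i++;   /* fine on every compiler: a plain statement */
    ++i;   /* also fine */

    /* The genuinely problematic cases are undefined behaviour, so results
       differ between compilers - a narrower rule could have banned just these:
       i = i++ + 1;
       int a[3] = {0, 0, 0}; a[i] = i++;
    */

    printf("%d\n", i);
    return 0;
}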


When I started working at one place and started entering my code into the source control, my boss suddenly came up to me and asked me to stop committing so much. He told me it is discouraged for a developer to do more than 1 commit per day because it litters the source control. I simply gaped at him...

Later I understood that the reason he even came up to me about it is that the SVN server would send him (and 10 more high executives) a mail for each commit someone makes. And by "littering the source control" I guess he meant his mailbox.


I had to spell and grammar check my comments. They had to be complete sentences, properly capitalized and finished with a period.


Anything having to do with formatting (especially the placement of '{' and other block characters) is always a pain to enforce.

Even with an automatic format check on each source file check-in, you can never be sure every developer will always use the same formatter, with the same set of formatting rules...

And then you have to merge those files back to trunk. And you commit suicide ;)


At a former job:

  • "Normal" tables begin with T_
  • "System" tables (usually lookups) begin with TS_ (except when they don't because somebody didn't feel like it that day)
  • Cross-reference tables begin with TSX_
  • All field names begin with F_

Yes, that's right. All of the fields, in every single table. So that we can tell it's a field.



We have a "no code past the 80th character column" rule that is controversial in our C++ development team. Liked, and enforced in code review, by some; despised by others.

Also, we have a very controversial C++ throw() / throw(...) specification standard. Religiously used by some and demonized by others. Both camps cite discussions and experts to support their respective positions.
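
For reference, a sketch of the kind of exception specifications being argued over (hypothetical functions, not our code; note that dynamic exception specifications were deprecated in C++11 and removed in C++17, which only fuels the argument):

#include <cstdlib>
#include <iostream>
#include <stdexcept>

int parse_port(const char* text) throw(std::invalid_argument)  // may only throw this
{
    int port = std::atoi(text);
    if (port <= 0)
        throw std::invalid_argument("bad port");
    return port;
}

void log_message(const char* msg) throw()   // promises not to throw
{
    std::cout << msg << std::endl;
}

int main()
{
    try {
        log_message("starting");
        std::cout << parse_port("8080") << std::endl;
    } catch (const std::invalid_argument& e) {
        std::cout << e.what() << std::endl;
    }
    return 0;
}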


Back in my C++ days we were not allowed to use ==, >=, <=, &&, etc.; there were macros for this...

if (bob EQ 7 AND alice LEQ 10)
{
   // blah
}

This was obviously to deal with the old "accidental assignment in a conditional" bug; however, we also had the rule "put constants before variables", so

if (NULL EQ ptr); //ok
if (ptr EQ NULL); //not ok

Just remembered, the simplest coding standard I ever heard was "Write code as if the next maintainer is a vicious psychopath who knows where you live."


When using SQL Server, which has such big limits on table name length that I've never personally bumped into them, we were forced to use the naming convention from the older mainframe system, even though the new system never interacted with the mainframe database.

Because of the tiny limit on the table names, the convention was to give all the tables codenames, rather than meaningful descriptions.

So, on a system that could quite happily have had the "customer" table called "ThisIsTheCustomerTable", instead it was called "TBRC03AA". And the next table was called "TBRC03AB", and the next one called "TBRC03AC", and so on.

That made the SQL really easy to understand, especially a month after you'd written it.


I completely disagree with this one, but I was forced to follow it:

"All HTML LINKS will ALWAYS be underlined."

A while back I explained why I disagree on my blog.

Note: Even Stackoverflow ONLY underlines links when you move the mouse over them.


Adding an 80 character comment at the end of each method so it is easy to find the end of the method. Like this:

void doSomething()
{
}
//----------------------------------------------------------------------------

The rationale being that:

  • some users don't use IDEs that have code folding (OK, I will give them that).
  • a space between methods is not clear, since people may not follow the other coding standards about indenting and brace placement, hence it would be hard to find the end of a function. (Not relevant; if you need to add this because people don't follow your coding standard, then why would they follow this one?)


In C++, we had to write explicitly everything that the compiler is supposed to write for us (default constructor, destructor, copy constructor, copy assignment operator) for every class. It looks like whoever wrote the standards was not very confident in the language.
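
A minimal sketch of what that rule produces (hypothetical class, not from that codebase): every special member written out by hand, even though each one does exactly what the compiler-generated version would do.

#include <string>

class Widget
{
public:
    Widget() {}                                          // default constructor
    ~Widget() {}                                         // destructor
    Widget(const Widget& other) : name_(other.name_) {}  // copy constructor
    Widget& operator=(const Widget& other)               // copy assignment operator
    {
        name_ = other.name_;
        return *this;
    }

private:
    std::string name_;
};

Drop the rule and the whole block collapses to a one-line class with a std::string member; the compiler writes the rest for you.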


I absolutely hate it when someone doesn't use a naming convention. Where I worked, the lead developer (who I replaced) couldn't figure out if he wanted to use camelCase or way_over_used_underscores. Personally, I hate the underscores and find camel case easier to read, but it doesn't really matter as long as you stick to one standard.

PHP is especially bad at this, take a look at mysql_numrows which merges the two without the caps.


Back in my COBOL days, we had to use three asterisks for comments (COBOL requires only one asterisk in column 7). We even had a pre-compiler that checked for this, and wouldn't compile your program if you used anything but three asterisks.


As I have always worked self-employed/freelancer/project leader, I never had to follow someone else's standards; all the standards were my own decisions. But I recently found a fun piece of a "coding standards document" from back when I was 15:

All functions must be named "ProjectName_FunctionName".

Well, procedural PHP, anyone? Those weren't the days of hardcore PHP OOP yet, but still. If I wanted to reuse code from one project in another, I would have had to rewrite all the references, etc.

I could have used something like "package_FunctionName".


I once worked under the tyranny of the Mighty VB King.

The VB King was the pure master of MS Excel and VBA, as well as databases (hence his nickname: he played with Excel while the developers worked with compilers, and challenging him on databases could have detrimental effects on your career...).

Of course, his immense skills gave him a unique vision of development problems and project management solutions: While not exactly coding standards in the strictest sense, the VB King regularly had new ideas about "coding standards" and "best practices" he tried (and oftentimes succeeded) to impose on us. For example:

  • All C/C++ arrays shall start at index 1, instead of 0. Indeed, the use of 0 as first index of an array is obsolete, and has been superseded by Visual Basic 6's insightful array index management.

  • All functions shall return an error code: There are no exceptions in VB6, so why would we need them at all? (i.e. in C++)

  • Since "All functions shall return an error code" is not practical for functions returning meaningful types, all functions shall have an error code as first [in/out] parameter.

  • All our code will check the error codes (this led to the worst case of VBScript if-indentation I ever saw in my career... Of course, as the "else" clauses were never handled, no error was actually found until too late).

  • Since we're working with C++/COM, starting this very day, we will code all our DOM utility functions in Visual Basic.

  • ASP 115 errors are evil. For this reason, we will use On Error Resume Next in our VBScript/ASP code to avoid them.

  • XSL-T is an object oriented language. Use inheritance to resolve your problems (dumb surprise almost broke my jaw open this one day).

  • Exceptions are not used, and thus should be removed. For this reason, we will uncheck the checkbox asking for destructor call in case of exception unwinding (it took days for an expert to find the cause of all those memory leaks, and he almost went berserk when he found out they had willingly ignored (and hidden) his technical note about checking the option again, sent handfuls of weeks before).

  • catch all exceptions in the COM interface of our COM modules, and dispose them silently (this way, instead of crashing, a module would only appear to be faster... Shiny!... As we used the über error handling described above, it even took us some time to understand what was really happening... You can't have both speed and correct results, can you?).

  • Starting today, our code base will split into four branches. We will manage their synchronization and integrate all bug corrections/evolutions by hand.

All but the C/C++ arrays, the VB DOM utility functions and XSL-T as an OOP language were implemented despite our protests. Of course, over time, some were discovered to be, ahem, broken, and were abandoned altogether.

Of course, the VB King credibility never suffered for that: Among the higher management, he remained a "top gun" technical expert...

This produced some amusing side effects, as you can see by following the link What is the best comment in source code you have ever encountered?


We had to sort all the functions in classes alphabetically, to make them "easier to find". Never mind that the IDE had a drop-down; that was too many clicks.

(The same tech lead wrote an app to remove all comments from our source code.)


You must use only five letter table names, and the last two characters are reserved for IO.


All documents in my company are version-controlled. So far, so good.

But for EVERY single file, upon first committing to CVS, you must immediately add two tags to it: CRE (for CREation) and DEV001 (for 1st DEVelopment cycle). As if it being the first version of the file itself wasn't enough.

After that, the process gets a bit more reasonable, fortunately.


Reverse indentation. For example:

    for(int i = 0; i < 10; i++)
        {
myFunc();
        }

and:

    if(something)
        {
// do A
        }
    else
        {
// do B
    }


One that no one has mentioned is being forced to write unit tests for classes that are brainless getters and setters.


We're coding to the MISRA standard. The ruleset has "MUST" and "CAN" parts, and we spent hours discussing which rules we don't want to apply and why, until one day upper management said "We want to tell our customers we're 100% compliant. Tomorrow, we apply all."

Among the rules is one that says: no bit operations on signed data. When we tried to find out what the rule is for, the explanation presented was: there is no guarantee about the bit representation of signed data. There is only 2's complement in the world, but the standard makes no guarantee!

Anyway, doesn't sound like a big thing - who wants to declare bitcoded variables as signed?

However, the holy rules checker interprets "integer promotion" as "promotion to signed" and the C standards guru says it has to be. And every bit operation does integer promotion. So instead of:

a &= ~(1 << i)

you have to write:

a = (unsigned int)(a & (unsigned int)~(unsigned int)(1 << i))

which is obviously much more readable and portable and all. Fortunately I found out that a shifted 1u stays unsigned. So you can reduce it to:

a = (unsigned int)(a & (unsigned int)~(1u << i))

Funnily, there is a rule that was not activated: forbidding funny characters like '\' in #include. The DOS-corrupted folks won't believe that writing #include "bla/foo.h" works even with every Windows compiler and is much more portable.


"The guys who wrote the compiler are probably a lot smarter than you so don't try something clever" is what one guide line document said (not quite literally).


The first language I used professionally was 4D. It supported interprocess variables prefixed by a <>, process variables with no prefixes and local variables which started with a $. All those prefixes (or lack thereof) are used by the compiler/interpreter to determine the variable's scope.

The actual strange coding standard was some sort of Hungarian notation. The catch was that instead of naming variables based on their types, they had to be prefixed according to their scope.

Variables, whose scope was already determined by their prefix, had to be prefixed with redundant information!

I don't dare ask the guy responsible for the standards why it had to be this way...


What drives me nuts is people suffixing the ID field of a table with the name of the table. What the hell is wrong with just ID? You're going to have to alias it anyway... for the love of all that is sacred!

Imagine what your SQL statements look like when you've got id fields called IDSEWEBLASTCUSTOMERACTION and IDSEEVENTLOGGER.


Being forced to have only 1 return statement at the end of a method and making the code fall down to that.

Also not being able to re-use case statements in a switch and let it drop through; I had to write a convoluted script that did a sort of loop of the switch to handle both cases in the right order.

Lastly, when I started using C, I found it very odd having to declare my variables at the top of a method, and I absolutely hated it. I'd spent a good couple of years in C++ and just declared them wherever I wanted. Unless there's an optimisation reason not to, I now declare all method variables at the top of the method, with details of what they all do - it makes maintenance A LOT easier.
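
A small sketch of the single-exit style in question (hypothetical function, for illustration only): every path funnels into the one return at the bottom instead of returning early.

#include <stddef.h>

int find_index(const int* values, size_t count, int wanted)
{
    int result = -1;   /* declared at the top; the single exit value */
    size_t i;

    if (values != NULL)
    {
        for (i = 0; i < count; i++)
        {
            if (values[i] == wanted)
            {
                result = (int)i;
                break;   /* no early return; just stop looping */
            }
        }
    }

    return result;   /* the only return statement in the function */
}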


In Java, when contracting somewhere that shall remain nameless, Interfaces were banned. The logic? The guy in charge couldn't find implementing classes with Eclipse...

Also banned - anonymous inner classes, on the grounds that the guy in charge didn't know what they were. Which made implementing a Swing GUI all kinds of fun.


In Delphi we had to change from

if something then
begin
  ...
end
else
begin
 ...
end;

to

if something then begin
  ...
end else begin
 ...
end;

in a project with 1.5 million lines of code. Imagine how easy this was on source control, diff, and merge! It also led to forgetting a begin and not noticing it right away when the compiler announced a superfluous end.


In a large group at my company, we use C++ almost exclusively. Passing by non-const reference is forbidden.

If you want to modify a parameter to a function, you must pass it by pointer.

We have an internal flame war over the pros (easier to identify function calls that can modify variables) and cons (ridiculousness; having to deal with possible NULL pointers when you want a parameter to be required) about once a year.
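
A minimal sketch of the two styles in the flame war (hypothetical function names): the banned non-const reference on top, the mandated pointer version below.

#include <cstddef>
#include <string>

// Banned under the rule: the call site gives no hint that 'name' may change.
void append_suffix_ref(std::string& name)
{
    name += "_checked";
}

// Required instead: the caller must write &value, but now NULL is possible.
void append_suffix_ptr(std::string* name)
{
    if (name != NULL)
    {
        *name += "_checked";
    }
}

int main()
{
    std::string a = "order";
    std::string b = "invoice";
    append_suffix_ref(a);    // modification is invisible at the call site
    append_suffix_ptr(&b);   // the & makes it visible
    return 0;
}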


If I remember correctly, the Delphi IDE defaulted to an indent of two spaces. Most of the legacy code for the company had three spaces and was written by the VP of IT and the CEO. One day, all the programmers were talking about what we could do to make our lives easier, and a contractor who knew Delphi pretty well said, "Hey, the IDE defaults to two spaces - does anyone have a problem with us doing this going forward for new code?" We all looked at each other, pretty much thought it was a no-brainer, and agreed.

Two days later the VP and CEO found out we were going to make such a dangerous change that could "cause problems", and instructed us that we would be using three-space indents for everything until the two of them could accurately evaluate the impact of such a change. Now, I am all for following standards, but these are the same people who thought OO programming meant creating an object with one function that held all of the logic necessary to perform an action, and that source control meant moving the code files to a different directory.


Although this wasn't at a job, we had a massive project for a class in college. One of the requirements was commenting every line of code in our application -- regardless of what it did... and each line had to be specific e.g.

int x=0; //declare variable x and assign it to 0

We weren't allowed to do this:

int x, y, z = 0; //declare and assign to 0

As it wasn't detailed enough. And that's not even following the naming conventions forced upon us.

Needless to say we spent a few hours going back through the code...


In my last job, my supervisor always enforced Murphy's Law:

"Anything that can go wrong will go wrong."

I guess it was so we didn't slack off doing some quick fixes in the code or something like that. And now I constantly have that phrase in my head.


You must use only five-letter table names, and the last two characters are reserved for IO.


The team size was about a dozen. For C# methods we had to put a huge XML-formatted comment before every function. I don't remember the format exactly, but it involved XML tags nested about three to five levels deep. Here's a sketch of the comment from memory.

/// <comment>
/// </comment>
/// <table>
///    <thead>
///       <tcolumns>
///          <column>Date</column>
///          <column>Modified By</column>
///          <column>Comment</column>
///       </tcolumns>
///    </thead>
///    <rows>
///       <row>
///          <column>10/10/2006</column>
///          <column>Fred</column>
///          <column>Created function</column>
///       </row>
///    </rows>
/// <parameters>

I've got to stop there....

The downsides were many.

  • Files were made up mostly of comments.
  • We were not using our version control system for tracking changes to files.
  • Writing many small functions hurt readability.
  • Lots of scrolling.
  • Some people did not update the comments.

I used a code snippet template (Emacs YASnippet) to add this comment block to my methods.


We have a "no code past the 80th character column" rule that is controversial in our C++ development team. Liked and enforced in code reviews by some; despised by others.

Also, we have a very controversial C++ throw() / throw(...) exception specification standard. Religiously used by some and demonized by others. Both camps cite discussions and experts to support their respective positions.
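
For anyone who hasn't run into them, here is a minimal sketch of the kind of specifications being argued over (hypothetical function names; these are C++98-style dynamic exception specifications, later deprecated in C++11 and removed in C++17):

#include <stdexcept>

// throw() promises the function will not throw at all.
void cleanup() throw();

// throw(type) promises the function throws at most that exception type.
void validate(int value) throw(std::invalid_argument)
{
    if (value < 0)
        throw std::invalid_argument("value must be non-negative");
}

// No specification at all: the function may throw anything.
void parse(const char* text);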


Using _ or m_ in front of member variables when you can simply use the keyword this. whenever you need to access them...


At the place I'm currently working, the official coding standard stipulates a maximum line length of eighty characters. The rationale was to allow hard copies of the code to be formatted properly. Needless to say, this led to very odd code layout. I've worked to eliminate this standard, mainly with the argument "when was the last time you made a hard copy of code?" Readability now, versus the chance of ever making a hard copy on an eighty-column DMP?

Skizz


I once had to spell out all acronyms, even industry-standard ones such as OpenGL. Variable names such as glu were no good; we had to use graphicsLibraryUtility.


One that no one has mentioned is being forced to write unit tests for classes that are brainless getters and setters.


While coding for a VB project I was asked to add the following comment section for each of the methods

'Module Name
'Module Description
'Parameters and description of each parameter
'Called by
'Calls

I found the rest quite all right, but I was against the last two. My argument was that as the project became large, they would become difficult to maintain. If we are writing library functions, we can never hope to keep Called by up to date. We were a small team of 6, so the manager's argument was that since we were the ones calling the functions, this should be maintained. Anyway, I had to give up the argument as the manager was adamant. The result was as expected: as the project grew larger, no one cared to maintain Called by and Calls.


The strangest was that type-qualified variable naming had to be used in Java, and the types were those of the columns from the database. So a java.sql.ResultSet had to be called tblClient, etc.


No ternary operator allowed where I currently work:

int value = (a < b) ? a : b;

... because not everyone "gets it". If you told me, "Don't use it because we've had to rewrite them when the structures get too complicated" (nested ternary operators, anyone?), then I'd understand. But when you tell me that some developers don't understand them... um... Sure.
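
To be fair to both sides, here's roughly what the ban forces you to write instead (a hypothetical sketch, not our actual code):

// Forbidden: the one-line version.
int smaller(int a, int b)
{
    return (a < b) ? a : b;
}

// Mandated: the same logic spelled out with if/else.
int smallerWithoutTernary(int a, int b)
{
    int result;
    if (a < b)
    {
        result = a;
    }
    else
    {
        result = b;
    }
    return result;
}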


Giving numbers to our tables, like tbl47_[some name]


On one of my first jobs the boss said that we should always use fully qualified type names in C# and forbade using directives, since we should always know exactly which type we're using when declaring a variable, parameter, etc.


I ran into two rules that I really hated on a C job a few years ago:

  1. "One module per file," where "module" was defined as a C function.

  2. Function-local variables allowed only at the top of the function, so this sort of thing was illegal:

if (test)
{
   int i;
   ...
}
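
Under that rule, the same code has to be rewritten along these lines (a hypothetical sketch):

int compute(void);        /* hypothetical helper, defined elsewhere */

void process(int test)
{
    int i = 0;            /* forced to the top of the function, even
                             though it only matters inside the if */

    /* ... possibly dozens of unrelated lines ... */

    if (test)
    {
        i = compute();    /* the use ends up a long way from the declaration */
    }
}
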

Doing all database queries via stored procedures in Sql Server 2000. From complex multi-table queries to simple ones like:

select id, name from people

The arguments in favor of procedures were:

  • Performance
  • Security
  • Maintainability

I know that the procedure topic is quite controversial, so feel free to score my answer negatively ;)


A friend of mine - we'll call him CodeMonkey - got his first job out of college [many years ago] doing in-house development in COBOL. His first program was rejected as 'not complying with our standards' because it used... [shudder!] nested IF statements.

The coding standards banned the use of nested IF statements.

Now, CodeMonkey was not shy and was certain of his abilities, so he persisted in asking everyone up the chain and down the aisle why this rule existed. Most claimed they did not know, some made up stuff about 'readability', and finally one person remembered the original reason: the first version of the COBOL compiler they used had a bug and didn't handle nested IF statements correctly.

This compiler bug, of course, had been fixed for at least a decade, but no one had challenged the standards. [baaa!]

CodeMonkey was successful in getting the standards changed - eventually!


Only one variable can be declared per logical line. [Rationale: multiple declarations per line result in an inaccurate line-of-code count.]
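
A tiny sketch of what the rule amounts to (the rationale really was the line count, nothing more):

// Counts as one "logical line" of code:
int width, height, depth;

// Counts as three, which is all the rule cared about:
int top;
int left;
int bottom;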


A buddy of mine encountered this rule while working at a government job. The use of ++ (pre or post) was completely banned. The reason: Different compilers might interpret it differently.
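
For what it's worth, a plain increment is well-defined on every conforming compiler; the only expressions where compilers genuinely diverge are the unsequenced ones, so (a hedged sketch) a much narrower rule would have done:

void increment_examples()
{
    int i = 0;

    i++;                  // well-defined everywhere: i becomes 1
    ++i;                  // also well-defined everywhere: i becomes 2

    int j = i++ + ++i;    // two unsequenced modifications of i: undefined
                          // behaviour, and the only sort of use where
                          // different compilers really do disagree
    (void)j;
}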


Several WTF's in one VB6 shop (I'm not proud, I was hungry and needed to eat) back in 2002 - 2004.

The most annoying, IMHO, was setting all object references to Nothing at the end of the sub/function. This was to "help" the compiler reference count. It didn't matter how many tests I performed for the TA to prove it wasn't necessary. Oh no, it still had to be done, even though he had absolutely no evidence to back him up whatsoever. Eventually I gave up, and about a year later found an article explaining why it was pants. I brought this to the TA thinking "Got the fecker!". He goes, "Yeah, I've known about that for years, but if you start changing the standard the sheep" - meaning other developers, the people he worked with every day - "will screw it up". Gob sh1te.

Others in the same shop.

  • Never delete code, always comment it out (even though we were using source control).
  • Prefixes on table names that were meaningless when I got there, but had to be enforced on new tables.
  • Prefixing all objects with o_ (lo_ for procedure-level references, mo_ for module, go_ for global). Absolutely pointless in a project where every other variable was an object reference.

Mostly I was writing c++ there (only c++ developer, so made own standards, and enforced with rigor!) with occasional vb, otherwise I wouldn't have lasted.


In Java, I am currently discouraged from using boolean functions as predicates in a test:

    if( list.isEmpty() )...

must be rewritten

    if( list.isEmpty() == true )...

and

    if( !list.isEmpty() )...

must be rewritten

    if( list.isEmpty() == false )...

because "it is clearer like that".

To me, "list.isEmpty() == true" has 2 verbs, "is" and "equals", in one phrase without a connective. I can't make it feel right.


The last place I worked was primarily a C++ shop, and before I was hired my boss (who was the director of research and development) had issued a decree that "dynamic memory allocation is not allowed". No "new", not even a "malloc" -- because "those can lead to memory leaks if a developer forgets the corresponding delete/free operation". As a corollary to this particular rule, "pointers are also not allowed" (although references were totally acceptable, being both awesome and safe).

I repealed those rules (as opposed to, say, rewriting all our software in other languages) but I did have to add a few awesome rules of my own, like "you may not launch a new thread without written approval from someone qualified to do that sort of thing" based on an unfortunate series of code reviews (sigh).
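
For context, the leak the decree was worried about is exactly what RAII and the standard containers already handle; a hedged sketch (hypothetical type) contrasting the banned style with the idiomatic one:

#include <memory>
#include <vector>

struct Widget { int size; };

void banned_style()
{
    Widget* w = new Widget();     // forbidden by the decree: forgetting the
    // ... use w ...              // matching delete really would leak
    delete w;
}

void idiomatic_style()
{
    std::unique_ptr<Widget> w(new Widget());  // released automatically
    std::vector<Widget> many(10);             // so is this
}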


Almost any kind of Hungarian notation.

The problem with Hungarian notation is that it is very often misunderstood. The original idea was to prefix the variable so that the meaning was clear. For example:

int appleCount = 0; // Number of apples.
int pearCount = 0; // Number of pears.

But most people use it to determine the type.

int iAppleCount = 0; // Number of apples.
int iPearCount = 0;  // Number of pears.

This is confusing because, although both numbers are integers, everybody knows you can't compare apples with pears.



(Probably only funny in the UK)

An insurer I worked at wanted a combination of "P" or "L" to denote the scope, concatenated with Hungarian notation for the type, on all properties.

The plus point was we had a property called pintMaster! Made us all fancy a drink.


The first language I used professionally was 4D. It supported interprocess variables prefixed by a <>, process variables with no prefixes and local variables which started with a $. All those prefixes (or lack thereof) are used by the compiler/interpreter to determine the variable's scope.

The actual strange coding standard was some sort of Hungarian notation. The catch was that instead of naming variables based on their types, they had to be prefixed according to their scope.

Variables whose scope was already determined by their prefix had to be prefixed with redundant information!

I don't dare ask the guy responsible for the standards why it had to be this way...


Not being allowed to use pointers or goto! (In C, no less!) Thankfully this was merely a "software engineering" class, from which I was able to graduate and then enter the "real world".


All file names must be in lower case...


Capitalizing Acronyms

DO capitalize both characters of two-character acronyms except the first word of a camel-cased identifier.

System.IO
public void StartIO(Stream ioStream)

DO capitalize only the first character of acronyms with three or more characters except the first word of a camel-cased identifier.

System.Xml
public void ProcessHtmlTag(string htmlTag)

DO NOT capitalize any of the characters of any acronyms, whatever their length, at the beginning of a camel-cased identifier.


Strangest was "this must be coded in C++". Presumably I'm being hired for my expertise. If my expert opinion says another language would do the job better, then that other language should be the one used. Telling me which tool I should use is about the same as telling an automobile mechanic that he's only allowed to use metric wrenches. And only wrenches.


My coding standards gripes are pretty tame compared to some of the heinous stuff I've seen here, but here goes:

I was on a project where some of the developers insisted on the most peculiar form of indenting I've ever seen:

if (condition)
   {
      x++;
      printf("Hello condition!\n");
   }
else
   {
      y++;
   }

We were developing for an embedded environment with a really rotten debugger. In fact, printf(), hexdump() and the mapfile were the preferred method of debugging. This of course meant using static was forbidden and all global variables and functions had to be of the form modulename_variablename.

Checking in code with warnings was forbidden (not such a bad thing), but the compiler would warn about any conditional that was constant. Therefore, the old macro/statement trick of do { something(); } while(0) was forbidden.
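
For anyone who hasn't seen it, the forbidden idiom is the usual way to make a multi-statement macro behave like a single statement; a minimal sketch (hypothetical macro):

#include <cstdio>

// The do/while(0) wrapper lets the macro be used anywhere a single
// statement is expected, including an unbraced if/else.
#define LOG_AND_RESET(counter)       \
    do {                             \
        std::printf("resetting\n");  \
        (counter) = 0;               \
    } while (0)   // <- the constant conditional the compiler warned about

void example(bool verbose, int& counter)
{
    if (verbose)
        LOG_AND_RESET(counter);   // expands safely; without the wrapper,
    else                          // this if/else would not even compile
        counter++;
}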

Lastly, leaving a trailing comma on an enumerator list or initializer was considered lazy, and thus forbidden:

enum debuglevel
   {
      NONE,
      FATAL,
      WARNING,
      VERBOSE,   // Naughty, naughty!
   };

As I've said, rather tame. But as a follower of "The Ten Commandments for C Programmers", I found the unconventional bracing style absolutely maddening.


The worst I've experienced had to do with code inspections. For some reason, even though we had and used our VCS's diff tool to see what had changed, when you wanted your code inspected you had to surround your changes in a file/function with comment blocks like so:

/*********...80charswide...***
 * START INSPECT
 */

 some changed code...

 /*
  * END INSPECT
  *********...80charswide...****/

After the inspection you'd have to go back and remove all those comment blocks before committing. Ugh.


The strangest one I saw was database table naming, where tables were prefaced with a TLA for the functional area (e.g. ACC for accounting), then a 3-digit number (to override the default sort), and then the table name.

Plus this was extended into the column names as well.

ACC100_AccountCode

It was a nightmare to read a query; they were so unreadable.


Inserting line breaks
(//--------------------------------------------------------------------------------)
between methods in a C# project.


In C++, we had to explicitly write everything that the compiler would otherwise generate for us (default constructor, destructor, copy constructor, copy assignment operator) for every class. It looks like whoever wrote the standards was not very confident in the language.
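
A minimal sketch of what the rule demanded, even for a class where the compiler-generated versions would do exactly the same thing (hypothetical class):

class Point
{
public:
    Point() : x(0), y(0) {}                    // default constructor
    Point(const Point& other)                  // copy constructor
        : x(other.x), y(other.y) {}
    Point& operator=(const Point& other)       // copy assignment operator
    {
        x = other.x;
        y = other.y;
        return *this;
    }
    ~Point() {}                                // destructor

private:
    int x;
    int y;
};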


The worst coding standard I've ever had to live with was insane indentation.

The code had originally been written on a mainframe using 60x80 character green-screen terminals (this was quite a long time ago). The default tab size on these things was 8 characters, but the programmers at the time decided that was too big - the screen itself only showed 80 characters across, so an 8-character tab wasted a lot of space.

So they decided to set the indent size for their code to 4 characters.

All fair enough, you say. Except that they didn't do it by changing the tab size. They did it by making the first indentation level 4 spaces, the second a single tab character, and so on, alternating between adding 4 spaces and adding a tab character.

While they stuck to the green screen terminals, this was fine. Weird, but fine.

The real chaos began when the development team got their shiny new Windows PCs.

The PC editor they chose had its tab size set to 4 characters, and so when the code was loaded, the indentation was simply all over the place.

We couldn't fix the indentation because some devs were still using the green screens, so for the year or so that it took to get the entire team transitioned to PCs, we had an absolute nightmare trying to work with code that was virtually unreadable in either one environment or the other (or more frequently, both).


In a large group at my company, we use C++ almost exclusively. Passing by non-const reference is forbidden.

If you want to modify a parameter to a function, you must pass it by pointer.

We have an internal flame war over the pros (easier to identify function calls that can modify variables) and cons (ridiculousness; having to deal with possible NULL pointers when you want a parameter to be required) about once a year.
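
A hedged sketch of the two styles the flame war is about (hypothetical function names):

#include <cassert>

// Forbidden: non-const reference. At the call site you can't tell
// that total may be modified.
void accumulateByReference(int value, int& total)
{
    total += value;
}

// Required: pointer. The call site gains a visible '&', but the callee
// now has to cope with the possibility of a null pointer.
void accumulateByPointer(int value, int* total)
{
    assert(total != 0);
    *total += value;
}

void caller()
{
    int total = 0;
    accumulateByReference(5, total);   // the modification is invisible here
    accumulateByPointer(5, &total);    // the '&' is the whole argument for the rule
}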


What drives me nuts is people suffixing the ID field of a table with the name of the table. What the hell is wrong with just ID? You're going to have to alias it anyway... for the love of all that is sacred!

Imagine what your SQL statements look like when you've got id fields called IDSEWEBLASTCUSTOMERACTION and IDSEEVENTLOGGER.


Applying s_ to variables and methods which were deemed "safety critical" for software that was part of a control system. Couple this with the other rule about putting m_ on the front of member variables and you'd get something ridiculous like "s_m_blah()", which is darn annoying to write and not very readable in my opinion. In the end, some 'safety expert' was supposed to gain insight by looking at the code and determining something from those "s_" prefixes - in practice, they didn't know C++ too well, so they couldn't do much other than make reports on the number of identifiers that we'd marked as 'safety critical'. Utter nonsense...


This isn't a coding standard issue, but it is surely a tale of restrictive thinking. We had completed a short 4-week project in no less than 7 weeks. The schedule was loosely based on guesstimating a feature list. The development process consisted of coding furiously. During the postmortem I suggested using milestones and breaking feature requests into tasks. Incredibly, my director dismissed my ideas, saying that because it was such a short project, we didn't need to use milestones or tasks, and asked for other suggestions. The room fell silent.

Language: Java, C++, HTML
Team size: two teams, totaling 10 engineers
Ill effects it caused me and my team: I felt like I was caught in a Dilbert cartoon.


Back in my COBOL days, we had to use three asterisks for comments (COBOL requires only one asterisk in column 7). We even had a pre-compiler that checked for this, and wouldn't compile your program if you used anything but three asterisks.


As I have always worked self-employed, as a freelancer or project leader, I never had to follow someone else's standards; all standards were my decisions. But I recently found a fun piece of a "coding standards document" from back when I was 15:

All functions must be named "ProjectName_FunctionName".

Well, procedural PHP, anyone? Those weren't the times of serious PHP OOP yet, but still. If I wanted to reuse code from one project in another, I would have to rewrite all the references, etc.

I could have used something like "package_FunctionName".


Using generic numbered identifier names

At my current work we have two rules which are really mean:

Rule 1: Every time we create a new field in a database table, we have to add additional reserve fields for future use. These reserve fields are numbered (because no one knows what data they will hold some day). The next time we need a new field, we first look for an unused reserve field.

So we end up with customer.reserve_field_14 containing the e-mail address of the customer.

At one point our boss thought about introducing reserve tables, but fortunately we could convince him not to do it.

Rule 2: One of our products is written in VB6, and VB6 has a limit on the total number of distinct identifier names. Since the code is very large, we constantly run into this limit. As a "solution", all local variable names are numbered:

  • Lvarlong1
  • Lvarlong2
  • Lvarstr1
  • ...

Although that effectively circumvents the identifier limit, these two rules combined lead to beautiful code like this:

...

If Lvarbool1 Then
  Lvarbool2 = True
End If

If Lvarbool2 Or Lvarstr1 <> Lvarstr5 Then
  db.Execute("DELETE FROM customer WHERE " _ 
      & "reserve_field_12 = '" & Lvarstr1 & "'")
End If

...

You can imagine how hard it is to fix old or someone else's code...

Latest update: Now we are also using "reserve procedures" for private members:

Private Sub LSub1(Lvarlong1 As Long, Lvarstr1 As String)
  If Lvarlong1 >= 0 Then 
    Lvarbool1 = LFunc1(Lvarstr1)
  Else
    Lvarbool1 = LFunc6()
  End If
  If Lvarbool1 Then
    LSub4 Lvarstr1
  End If
End Sub

EDIT: It seems that this code pattern is becoming more and more popular. See this post on The Daily WTF to learn more: Astigmatism :)


At my first job, all C programs, no matter how simple or complex, had only four functions. You had the main, which called the other three functions in turn. I can't remember their names, but they were something along the lines of begin(), middle(), and end(). begin() opened files and database connections, end() closed them, and middle() did everything else. Needless to say, middle() was a very long function.

And just to make things even better, all variables had to be global.

One of my proudest memories of that job is having been part of the general revolt that led to the destruction of those standards.

