[language-agnostic] What's your most controversial programming opinion?

This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.

The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).

So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.

Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.


You can't write a web application without a remote debugger

Web applications typically tie together interactions between multiple languages on the client and server side, require interaction from a user and often include third-party code that can be anything from a simple API implementation to a byzantine framework.

I've lost count of the number of times another developer has sat with me while I stepped into and followed through what's actually going on in a complex web application with a decent remote debugger, only to be flabbergasted and amazed that such tools exist. Often they still don't take the trouble to install and set up these kinds of tools even after seeing them in action.

You just can't debug a non-trivial web application with print statements. Times ten if you didn't write all the code in your application yourself.

If your debugger can step through all the various languages in use and show you the HTTP transactions taking place, then so much the better.

You can't develop web applications without Firebug

Along similar lines, once you have used Firebug (or a very near equivalent) you look on anyone trying to develop web applications without it with a mixture of pity and horror. Particularly with Firebug showing computed styles: if you remember back to NOT using it, spending hours randomly changing various bits of CSS and adding "!important" in too many places to be funny, you will never go back.


If you need to read the manual, the software isn't good enough.

Plain and simple :-)


"Don't call virtual methods from constructors". This is only sometimes a PITA, but is only so because in C# I cannot decide at which point in a constructor to call my base class's constructor. Why not? The .NET framework allows it, so what good reason is there for C# to not allow it?

Damn!


Apparently it is controversial that IDEs should check to see whether they can link up the code they create before wasting time compiling

But I'm of the opinion that I shouldn't compile a zillion lines of code only to realize that Windows has a lock on the file I'm trying to create because another programmer has some weird threading issue that requires him to Delay Unloading DLLs for 3 minutes after they aren't supposed to be used.


Most professional programmers suck

I have come across too many people doing this job for their living who were plain crappy at what they were doing. Crappy code, bad communication skills, no interest in new technology whatsoever. Too many, too many...


For a good programmer language is not a problem.

It may not be very controversial, but I hear a lot of whining from other programmers, like "why don't they all use Delphi?", "C# sucks", "I would change company if they forced me to use Java", and so on.
What I think is that a good programmer is flexible and is able to write good programs in any programming language that he might have to learn in his life.


C (or C++) should be the first programming language

The first language should NOT be the easy one; it should be one that sets up the student's mind and prepares it for serious computer science.
C is perfect for that: it forces students to think about memory and all the low-level stuff, and at the same time they can learn how to structure their code (it has functions!).

C++ has the added advantage that it really sucks :) thus the students will understand why people had to come up with Java and C#


"Googling it" is okay!

Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.

Too often I hear googling answers to problems being made the target of criticism, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.

What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.

(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)


Junior programmers should be assigned to doing object/ module design and design maintenance for several months before they are allowed to actually write or modify code.

Too many programmers/developers make it to the 5 and 10 year marks without understanding the elements of good design. It can be crippling later when they want to advance beyond just writing and maintaining code.


A real programmer loves open-source like a soulmate and loves Microsoft as a dirty but satisfying prostitute


The simplest approach is the best approach

Programmers like to solve assumed or inferred requirements that add levels of complexity to a solution.

"I assume this block of code is going to be a performance bottleneck, therefore I will add all this extra code to mitigate this problem."

"I assume the user is going to want to do X, therefore I will add this really cool additional feature."

"If I make my code solve for this unneeded scenario it will be a good opportunity to use this new technology I've been interested in trying out."

In reality, the simplest solution that meets the requirements is best. This also gives you the most flexibility in taking your solution in a new direction if and when new requirements or problems come up.


Print statements are a valid way to debug code

I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than firing up a debugger, and you can compare printed outputs against other runs of the app.

Just make sure to remove the print statements when you go to production (or better, turn them into logging statements)
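
A minimal Java sketch of the "turn them into logging statements" suggestion (the OrderProcessor class and its method are made up for illustration), using java.util.logging so the output can be turned down via configuration instead of deleted:

import java.util.logging.Level;
import java.util.logging.Logger;

public class OrderProcessor {
    private static final Logger LOG = Logger.getLogger(OrderProcessor.class.getName());

    void process(int orderId) {
        // Quick-and-dirty print debugging...
        System.out.println("processing order " + orderId);

        // ...versus the same information kept as a logging statement, which survives
        // into production and can be silenced or enabled without touching the code.
        LOG.log(Level.FINE, "processing order {0}", orderId);
    }
}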


A degree in computer science does not---and is not supposed to---teach you to be a programmer.

Programming is a trade; computer science is a field of study. You can be a great programmer and a poor computer scientist, or a great computer scientist and an awful programmer. It is important to understand the difference.

If you want to be a programmer, learn Java. If you want to be a computer scientist, learn at least three almost completely different languages. e.g. (assembler, c, lisp, ruby, smalltalk)


"Comments are Lies"

Comments don't run and are easily neglected. It's better to express the intention with clear, refactored code illustrated by unit tests. (Unit tests written TDD of course...)

We don't write comments because they're verbose and obscure what's really going on in the code. If you feel the need to comment - find out what's not clear in the code and refactor/write clearer tests until there's no need for the comment...

... something I learned from Extreme Programming (assumes of course that you have established team norms for cleaning the code...)


Software development is just a job

Don't get me wrong, I enjoy software development a lot. I've written a blog for the last few years on the subject. I've spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.

But in the grand scheme of things, it is just a job.

It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I'd rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.

I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.


I also think there's nothing wrong with having binaries in source control... if there is a good reason for it. If I have an assembly I don't have the source for, and it might not necessarily be in the same place on each dev's machine, then I will usually stick it in a "binaries" directory and reference it in a project using a relative path.

Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.


Web services absolutely suck, and are not the way of the future. They are ridiculously inefficient and they don't guarantee ordered delivery. Web services should NEVER be used within a system where both client and server are being written. They are mostly useful for Mickey Mouse mash-up type applications. They should definitely not be used for any kind of connection-oriented communication.

This stance has gotten me and my colleagues into some very heated discussions, since web services are such a buzzy topic. Any project that mandates the use of web services is doomed, because it is clearly already having ridiculous demands pushed down from management.


The best code is often the code you don't write. As programmers we want to solve every problem by writing some cool method. Anytime we can solve a problem and still give the users 80% of what they want without introducing more code to maintain and test we have provided waaaay more value.


To Be A Good Programmer really requires working in multiple aspects of the field: Application development, Systems (Kernel) work, User Interface Design, Database, and so on. There are certain approaches common to all, and certain approaches that are specific to one aspect of the job. You need to learn how to program Java like a Java coder, not like a C++ coder and vice versa. User Interface design is really hard, and uses a different part of your brain than coding, but implementing that UI in code is yet another skill as well. It is not just that there is no "one" approach to coding, but there is not just one type of coding.


My controversial opinion: Object Oriented Programming is absolutely the worst thing that's ever happened to the field of software engineering.

The primary problem with OOP is the total lack of a rigorous definition that everyone can agree on. This easily leads to implementations that have logical holes in them, or to languages like Java that adhere to this bizarre religious dogma about what OOP means, while forcing the programmer into all these contortions and "design patterns" just to work around the limitations of a particular OOP system.

So, OOP tricks the programmer into thinking they're making these huge productivity gains, that OOP is somehow a "natural" way to think, while forcing the programmer to type boatloads of unnecessary boilerplate.

Then since nobody knows what OOP actually means, we get vast amounts of time wasted on petty arguments about whether language X or Y is "truly OOP" or not, what bizarre cargo cultish language features are absolutely "essential" for a language to be considered "truly OOP".

Instead of demanding that this language or that language be "truly oop", we should be looking at what language features are shown by experiment, to actually increase productivity, instead of trying to force it into being some imagined ideal language, or indeed forcing our programs to conform to some platonic ideal of a "truly object oriented program".

Instead of insisting that our programs conform to some platonic ideal of "Truly object oriented", how about we focus on adhering to good engineering principles, making our code easy to read and understand, and using the features of a language that are productive and helpful, regardless of whether they are "OOP" enough or not.


A good developer needs to know more than just how to code


I am of the opinion that there are too many people making programming decisions who shouldn't be worrying about implementation.


Less code is better than more!

If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.


It's okay to be Mort

Not everyone is a "rockstar" programmer; some of us do it because it's a good living, and we don't care about all the latest fads and trends; we just want to do our jobs.


Software engineers should not work with computer science guys

Their differences:
SEs care about code reusability, while CSs just suss out code.
SEs care about performance, while CSs just want to have things done now.
SEs care about the whole structure, while CSs do not give a toss.
...


Two lines of code is too many.

If a method has a second line of code, it is a code smell. Refactor.


Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)


Estimates are for me, not for you

Estimates are a useful tool for me, as development line manager, to plan what my team is working on.

They are not a promise of a feature's delivery on a specific date, and they are not a stick for driving the team to work harder.

IMHO if you force developers to commit to estimates you get the safest possible figure.

For instance -

I think a feature will probably take me around 5 days. There's a small chance of an issue that would make it take 30 days.

If the estimates are just for planning then we'll all work to 5 days, and account for the small chance of an issue should it arise.

However - if meeting that estimate is required as a promise of delivery what estimate do you think gets given?

If a developer's bonus or job security depends on meeting an estimate do you think they give their most accurate guess or the one they're most certain they will meet?

This opinion of mine is controversial with other management, and has been interpreted as me trying to worm my way out of having proper targets, or me trying to cover up poor performance. It's a tough sell every time, but one that I've gotten used to making.


A majority of the 'user-friendly' Fourth Generation Languages (SQL included) are worthless overrated pieces of rubbish that should have never made it to common use.

4GLs in general have a wordy and ambiguous syntax. Though 4GLs are supposed to allow 'non technical people' to write programs, you still need the 'technical' people to write and maintain them anyway.

4GL programs in general are harder to write, harder to read and harder to optimize than their conventional third-generation counterparts.

4GLs should be avoided as far as possible.


The best programmers trace all their code in the debugger and test all paths.

Well... the OP said controversial!



Never make up your mind on an issue before thoroughly considering said issue. No programming standard EVER justifies approaching an issue in a poor manner. If the standard demands a class to be written, but after careful thought, you deem a static method to be more appropriate, always go with the static method. Your own discretion is always better than even the best forward thinking of whoever wrote the standard. Standards are great if you're working in a team, but rules are meant to be broken (in good taste, of course).


Design Patterns are a symptom of Stone Age programming language design

They have their purpose. A lot of good software gets built with them. But the fact that there was a need to codify these "recipes" for psychological abstractions about how your code works/should work speaks to a lack of programming languages expressive enough to handle this abstraction for us.

The remedy, I think, lies in languages that allow you to embed more and more of the design into the code, by defining language constructs that might not exist or might not have general applicability but really really make sense in situations your code deals with incessantly. The Scheme people have known this for years, and there are things possible with Scheme macros that would make most monkeys-for-hire piss their pants.


You need to watch out for Object-Obsessed Programmers.

e.g. if you write a class that models built-in types such as ints or floats, you may be an object-obsessed programmer.


There are far too many programmers who write far too much code.


Garbage collection is overrated

Many people consider the introduction of garbage collection in Java one of the biggest improvements compared to C++. I consider the improvement to be very minor at best; well-written C++ code does all the memory management at the proper places (with techniques like RAII), so there is no need for a garbage collector.


Exceptions considered harmful.


C must die.

Voluntarily programming in C when another language (say, D) is available should be punishable for neglect.


I often get shouted down when I claim that the code is merely an expression of my design. I quite dislike the way I see so many developers design their system "on the fly" while coding it.

The amount of time and effort wasted when one of these cowboys falls off his horse is amazing - and 9 times out of 10 the problem they hit would have been uncovered with just a little upfront design work.

I feel that modern methodologies do not emphasize the importance of design in the overall software development process. E.g., the importance placed on code reviews when you haven't even reviewed your design! It's madness.


I'm probably gonna get roasted for this, but:

Making invisible characters syntactically significant in python was a bad idea

It's distracting, causes lots of subtle bugs for novices and, in my opinion, wasn't really needed. About the only code I've ever seen that didn't voluntarily follow some sort of decent formatting guide was from first-year CS students. And even if code doesn't follow "nice" standards, there are plenty of tools out there to coerce it into a more pleasing shape.


Here's mine:

"You don't need (textual) syntax to express objects and their behavior."

I subscribe to the ideas of Jonathan Edwards and his Subtext project - http://alarmingdevelopment.org/


Managers know everything

It's been my experience that managers usually didn't get where they are by knowing code. No matter what you tell them, it's too long, not right, or too expensive.

And another that follows on from the first:

There's never time to do it right but there's always time to do it again

A good engineer friend once said that in anger to describe a situation where management halved his estimates, got a half-assed version out of him then gave him twice as much time to rework it because it failed. It's a fairly regular thing in the commercial software world.

And one that came to mind today while trying to configure a router with only a web interface:

Web interfaces are for suckers

The CLI on the previous version of the firmware was oh so nice. This version has a web interface, which attempts to hide all of the complexity of networking from clueless IT droids, and can't even get VLANs correct.


My controversial view is that the "While" construct should be removed from all programming languages.

You can easily replicate While using "Repeat" and a boolean flag, and I just don't believe that it's useful to have the two structures. In fact, I think that having both "Repeat...Until" and "While..EndWhile" in a language confuses new programmers.
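
For what it's worth, here is a minimal Java sketch of that claim, emulating a while loop with a repeat-style do/while loop and a boolean flag (the loop body is just an example):

public class WhileViaRepeat {
    public static void main(String[] args) {
        int i = 0;

        // Equivalent of: while (i < 5) { System.out.println(i); i++; }
        boolean keepGoing = true;
        do {
            if (i < 5) {
                System.out.println(i);
                i++;
            } else {
                keepGoing = false; // the test failed, so stop without running the body again
            }
        } while (keepGoing);
    }
}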

Update - Extra Notes

One common mistake new programmers make with While is they assume that the code will break as soon as the tested condition flags false. So - If the While test flags false half way through the code, they assume a break out of the While Loop. This mistake isn't made as much with Repeat.

I'm actually not that bothered which of the two loops types is kept, as long as there's only one loop type. Another reason I have for choosing Repeat over While is that "While" functionality makes more sense written using "repeat" than the other way around.

Second Update: I'm guessing that the fact I'm the only person currently running with a negative score here means this actually is a controversial opinion. (Unlike the rest of you. Ha!)


Getters and Setters are Highly Overused

I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public; maybe a bit different if you're using threads (but that's generally not the case) or if your accessors have business/presentation logic (something 'strange' at least).

I'm not in favor of public fields, but I'm against making a getter/setter (or Property) for every one of them, and then claiming that doing so is encapsulation or information hiding... ha!

UPDATE:

This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).

First of all: anyone who uses public fields deserves jail time

Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.

Many people think:

private fields + public accessors == encapsulation

I say (automatic or not) generation of getter/setter pair for your fields effectively goes against the so called encapsulation you are trying to achieve.
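
A tiny illustration of the point (both classes are hypothetical): to callers, a blindly generated getter/setter pair offers essentially the same unrestricted access as a public field.

class Point1 {
    public int x;                    // public field: anyone can read and write it
}

class Point2 {
    private int x;                   // "encapsulated"...

    public int getX() { return x; }

    public void setX(int x) {        // ...yet callers can still read and write it freely
        this.x = x;
    }
}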

Lastly, let me quote Uncle Bob in this topic (taken from chapter 6 of "Clean Code"):

There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?


Software reuse is the most important way to optimize software development

Somehow, software reuse seemed to be in vogue for some time, but it lost its charm when many companies found out that just writing PowerPoint presentations with reuse slogans doesn't actually help. They reasoned that software reuse is just not "good enough" and can't live up to their dreams. So it seems that it is not in vogue any more -- it was replaced by plenty of project management newcomers (Agile, for example).

The fact is that any really good developer performs some kind of software reuse by himself. I would say any developer not doing software reuse is a bad developer!

I have experienced myself how much performance and stability software reuse can bring to development. But of course, a set of PowerPoints and half-hearted commitments from management does not suffice to realize its full potential in a company.

I have linked a very old article of mine about software reuse (see title). It was originally written in German and translated afterwards -- so please excuse it if the writing is not that good.


Inversion of control does not eliminate dependencies, but it sure does a great job of hiding them.
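
A minimal sketch (hypothetical ReportService and ReportRepository types): constructor injection moves the dependency out of the class body and into the container's wiring, but the class still cannot work without it.

interface ReportRepository {
    String findReport(long id);
}

class ReportService {
    private final ReportRepository repository;  // the dependency is still here...

    ReportService(ReportRepository repository) {
        this.repository = repository;           // ...the container just decides which implementation arrives
    }

    String render(long id) {
        return "Report: " + repository.findReport(id);
    }
}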


To produce great software, you need domain specialists as much as good developers.


It's a good idea to keep optimisation in mind when developing code.

Whenever I say this, people always reply: "premature optimisation is the root of all evil".

But I'm not saying optimise before you debug. I'm not even saying optimise ever, but when you're designing code, bear in mind the possibility that this might become a bottleneck, and write it so that it will be possible to refactor it for speed, without tearing the API apart.

Hugo


Having a process that involves code being approved before it is merged onto the main line is a terrible idea. It breeds insecurity and laziness in developers: people who know they could be screwing up dozens of others would normally be very careful about the changes they make, but instead they get lulled into a sense of not having to think about all the possible clients of the code they may be affecting. The person going over the code is less likely to have thought about it as much as the person writing it, so it can actually lead to poorer quality code being checked in... though, yes, it will probably follow all the style guidelines and be well commented :)


VB sucks
While not terribly controversial in general, it is when you work in a VB house.


Programmers who spend all day answering questions on Stackoverflow are probably not doing the work they are being paid to do.


Objects Should Never Be In An Invalid State

Unfortunately, so many ORM frameworks mandate zero-arg constructors for all entity classes, using setters to populate the member variables. In those cases, it's very difficult to know which setters must be called in order to construct a valid object.

MyClass c = new MyClass(); // Object in invalid state. Doesn't have an ID.
c.setId(12345); // Now object is valid.

In my opinion, it should be impossible for an object to ever find itself in an invalid state, and the class's API should actively enforce its class invariants after every method call.

Constructors and mutator methods should atomically transition an object from one valid state to another. This is much better:

MyClass c = new MyClass(12345); // Object starts out valid. Stays valid.

As the consumer of some library, it's a huuuuuuge pain to keep track of whether all the right setters have been invoked before attempting to use an object, since the documentation usually provides no clues about the class's contract.
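
A minimal sketch of the constructor-enforced alternative (the positive-ID rule is assumed purely for illustration): the object cannot exist without a valid ID, so no caller ever sees it in an invalid state.

public final class MyClass {
    private final long id;

    public MyClass(long id) {
        if (id <= 0) {
            throw new IllegalArgumentException("id must be positive");
        }
        this.id = id;                 // from here on, the invariant always holds
    }

    public long getId() {
        return id;
    }
}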


Neither Visual Basic nor C# trumps the other. They are pretty much the same, save some syntax and formatting.


Every developer should spend several weeks, or even months, developing paper-based systems before they start building electronic ones. They should also then be forced to use their systems.

Developing a good paper-based system is hard work. It forces you to take into account human nature (cumbersome processes get ignored, ones that are too complex tend to break down), and teaches you to appreciate the value of simplicity (new work goes in this tray, work for QA goes in this tray, archiving goes in this box).

Once you've worked out how to build a system on paper, it's often a lot easier to build an effective computer system - one that people will actually want to (and be able to) use.

The systems we develop are not manned by an army of perfectly-trained automata; real people use them, real people who are trained by managers who are also real people and have far too little time to waste training them how to jump through your hoops.

In fact, for my second point:

Every developer should be required to run an interactive training course to show users how to use their software.


Preconditions for arguments to methods/functions should be part of the language, rather than something programmers always have to check by hand.
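
As things stand, those preconditions are hand-written checks; a small Java sketch of the status quo the opinion argues against (the divide method is made up for illustration):

public final class Preconditions {
    // The precondition lives in the body as an explicit check, rather than being
    // declared and enforced by the language itself.
    static double divide(double numerator, double denominator) {
        if (denominator == 0.0) {
            throw new IllegalArgumentException("denominator must be non-zero");
        }
        return numerator / denominator;
    }

    public static void main(String[] args) {
        System.out.println(divide(10.0, 4.0)); // prints 2.5
    }
}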


Code == Design

I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.


Here's an article on the topic of Code as Design.


Best practices aren't.


Exceptions should only be used in truly exceptional cases

It seems like the use of exceptions has run rampant on the projects I've worked on recently.

Here's an example:

We have filters that intercept web requests. The filter calls a screener, and the screener's job is to check to see if the request has certain input parameters and validate the parameters. You set the fields to check for, and the abstract class makes sure the parameters are not blank, then calls a screen() method implemented by your particular class to do more extended validation:

public boolean processScreener(HttpServletRequest req, HttpServletResponse resp, FilterConfig filterConfig) throws Exception {
    // Make sure the required fields are present before running the extended validation.
    if (!checkFieldExistence(req)) {
        return false;
    }
    return screen(req, resp, filterConfig);
}

That checkFieldExistence(req) method never returns false. It returns true if none of the fields are missing, and throws an exception if a field is missing.

I know that this is bad design, but part of the problem is that some architects here believe that you need to throw an exception every time you hit something unexpected.

Also, I am aware that the signature of checkFieldExistence(req) does declare that it throws an Exception; it's just that almost all of our methods do, so it didn't occur to me that the method might throw an exception instead of returning false. I only noticed it once I dug through the code.


The more process you put around programming, the worse the code becomes

I have noticed something in my 8 or so years of programming, and it seems ridiculous. It's that the only way to get quality is to employ quality developers, and remove as much process and formality from them as you can. Unit testing, coding standards, code/peer reviews, etc only reduce quality, not increase it. It sounds crazy, because the opposite should be true (more unit testing should lead to better code, great coding standards should lead to more readable code, code reviews should improve the quality of code) but it's not.

I think it boils down to the fact we call it "Software Engineering" when really it's design and not engineering at all.


Some numbers to substantiate this statement:

From the Editor

IEEE Software, November/December 2001

Quantifying Soft Factors

by Steve McConnell

...

Limited Importance of Process Maturity

... In comparing medium-size projects (100,000 lines of code), the one with the worst process will require 1.43 times as much effort as the one with the best process, all other things being equal. In other words, the maximum influence of process maturity on a project’s productivity is 1.43. ...

... What Clark doesn’t emphasize is that for a program of 100,000 lines of code, several human-oriented factors influence productivity more than process does. ...

... The seniority-oriented factors alone (AEXP, LTEX, PEXP) exert an influence of 3.02. The seven personnel-oriented factors collectively (ACAP, AEXP, LTEX, PCAP, PCON, PEXP, and SITE §) exert a staggering influence range of 25.8! This simple fact accounts for much of the reason that non-process-oriented organizations such as Microsoft, Amazon.com, and other entrepreneurial powerhouses can experience industry-leading productivity while seemingly shortchanging process. ...

The Bottom Line

... It turns out that trading process sophistication for staff continuity, business domain experience, private offices, and other human-oriented factors is a sound economic tradeoff. Of course, the best organizations achieve high motivation and process sophistication at the same time, and that is the key challenge for any leading software organization.

§ Read the article for an explanation of these acronyms.


Programming is in its infancy.

Even though programming languages and methodologies have been evolving very quickly for years now, we still have a long way to go. The signs are clear:

  1. Language Documentation is spread haphazardly across the internet (stackoverflow is helping here).

  2. Languages cannot evolve syntactically without breaking prior versions.

  3. Debugging is still often done with printf.

  4. Language libraries or other forms of large scale code reuse are still pretty rare.

Clearly all of these are improving, but it would be nice if we all could agree that this is the beginning and not the end. =)


QA should know the code (indirectly) better than development. QA gets paid to find things development didn't intend to happen, and they often do. :) (Btw, I'm a developer who just values good QA guys a whole bunch -- far too few of them... far too few.)


A Good Programmer Hates Coding

Similar to "A Good Programmer is a Lazy Programmer" and "Less Code is Better." But by following this philosophy, I have managed to write applications which might otherwise use several times as much code (and take several times as much development time). In short: think before you code. Most of the parts of my own programs which end up causing problems later were parts that I actually enjoyed coding, and thus had too much code, and thus were poorly written. Just like this paragraph.

A Good Programmer is a Designer

I've found that programming uses the same concepts as design (as in, the same design concepts used in art). I'm not sure most other programmers find the same thing to be true; maybe it is a right brain/left brain thing. Too many programs out there are ugly, from their code to their command line user interface to their graphical user interface, and it is clear that the designers of these programs were not, in fact, designers.

Although correlation may not, in this case, imply causation, I've noticed that as I've become better at design, I've become better at coding. The same process of making things fit and feel right can and should be used in both places. If code doesn't feel right, it will cause problems because either a) it is not right, or b) you'll assume it works in a way that "feels right" later, and it will then again be not right.

Art and code are not on opposite ends of the spectrum; code can be used in art, and can itself be a form of art.

Disclaimer: Not all of my code is pretty or "right," unfortunately.


System.Data.DataSet Rocks!

Strongly-typed DataSets are better, in my opinion, than custom DDD objects for most business applications.

Reasoning: We're bending over backwards to figure out Unit of Work on custom objects, LINQ to SQL, Entity Framework, and it's all adding complexity. Use a nice code generator from somewhere to generate the data layer, and the Unit of Work sits on the object collections (DataTable and DataSet) -- no mystery.


"Googling it" is okay!

Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.

Too often I hear googling answers to problems the result of criticism, and it really is without sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.

What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.

(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)


Readability is the most important aspect of your code.

Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.


If you have any idea how to program you are not fit to place a button on a form

Is that controversial enough? ;)

No matter how hard we try, it's almost impossible to have appropriate empathy with 53-year-old Doris who has to use our order-entry software. We simply cannot grasp the mental model of what she imagines is going on inside the computer, because we don't need to imagine: we know what's going on, or have a very good idea.

Interaction Design should be done by non-programmers. Of course, this is never actually going to happen. Contradictorily I'm quite glad about that; I like UI design even though deep down I know I'm unsuited to it.

For further info, read the book The Inmates Are Running the Asylum. Be warned, I found this book upsetting and insulting; it's a difficult read if you are a developer that cares about the user's experience.


Opinion: Never ever have different code between "debug" and "release" builds

The main reason being that release code almost never gets tested. Better to have the same code running in test as it is in the wild.


Reuse of code is inversely proportional to its "reusability". Simply because "reusable" code is more complex, whereas quick hacks are easy to understand, so they get reused.

Software failures should take down the system, so that it can be examined and fixed. Software attempting to handle failure conditions is often worse than crashing. ie, is it better to have a system reset after crashing, or should it be indefinitely hung because the failure handler has a bug?


Hibernate is useless and damaging to the minds of developers.


Premature optimization is NOT the root of all evil! Lack of proper planning is the root of all evil.

Remember the old naval saw

Proper Planning Prevents P*ss Poor Performance!



Design patterns are hurting good design more than they're helping it.

IMO software design, especially good software design, is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember - and they're far too abstract for people to really remember more than a handful. So they're not helping much.

And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere - usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.


In my workplace, I've been trying to introduce more Agile/XP development habits. Continuous Design is the one I've felt most resistance on so far. Maybe I shouldn't have phrased it as "let's round up all of the architecture team and shoot them"... ;)


PHP sucks ;-)

The proof is in the pudding.


90 percent of programmers are pretty damn bad programmers, and virtually all of us have absolutely no tools to evaluate our current ability level (although we can generally look back and realize how bad we USED to suck)

I wasn't going to post this because it pisses everyone off and I'm not really trying for a negative score or anything, but:

A) isn't that the point of the question, and

B) Most of the "Answers" in this thread prove this point

I heard a great analogy the other day: Programming abilities vary AT LEAST as much as sports abilities. How many of us could jump into a professional team and actually improve their chances?


The latest design patterns tend to be so much snake oil. As has been said previously in this question, overuse of design patterns can harm a design much more than help it.

If I hear one more person saying that "everyone should be using IOC" (or some similar pile of turd), I think I'll be forced to hunt them down and teach them the error of their ways.


Apparently mine is that Haskell has variables. This is both "trivial" (according to at least eight SO users) (though nobody can seem to agree on which trivial answer is correct), and a bad question even to ask (according to at least five downvoters and four who voted to close it). Oh, and I (and computing scientists and mathematicians) am wrong, though nobody can provide me a detailed explanation of why.


Classes should fit on the screen.

If you have to use the scroll bar to see all of your class, your class is too big.

Code folding and miniature fonts are cheating.


Okay, I said I'd give a bit more detail on my "sealed classes" opinion. I guess one way to show the kind of answer I'm interested in is to give one myself :)

Opinion: Classes should be sealed by default in C#

Reasoning:

There's no doubt that inheritance is powerful. However, it has to be somewhat guided. If someone derives from a base class in a way which is completely unexpected, this can break the assumptions in the base implementation. Consider two methods in the base class, where one calls another - if these methods are both virtual, then that implementation detail has to be documented, otherwise someone could quite reasonably override the second method and expect a call to the first one to work. And of course, as soon as the implementation is documented, it can't be changed... so you lose flexibility.
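
A small Java illustration of that hazard (hypothetical Account and AuditedAccount classes; Java instance methods are virtual by default, so the situation mirrors the C# case with virtual methods):

class Account {
    private double balance;

    public void deposit(double amount) {
        balance += amount;
    }

    public void depositAll(double[] amounts) {
        for (double amount : amounts) {
            deposit(amount);          // undocumented detail: depositAll calls deposit
        }
    }
}

class AuditedAccount extends Account {
    private int depositCount;

    @Override
    public void deposit(double amount) {
        depositCount++;               // the override silently changes depositAll's behaviour too
        super.deposit(amount);
    }
}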

C# took a step in the right direction (relative to Java) by making methods sealed by default. However, I believe a further step - making classes sealed by default - would have been even better. In particular, it's easy to override methods (or not explicitly seal existing virtual methods which you don't override) so that you end up with unexpected behaviour. This wouldn't actually stop you from doing anything you can currently do - it's just changing a default, not changing the available options. It would be a "safer" default though, just like the default access in C# is always "the most private visibility available at that point."

By making people explicitly state that they wanted people to be able to derive from their classes, we'd be encouraging them to think about it a bit more. It would also help me with my laziness problem - while I know I should be sealing almost all of my classes, I rarely actually remember to do so :(

Counter-argument:

I can see an argument that says that a class which has no virtual methods can be derived from relatively safely without the extra inflexibility and documentation usually required. I'm not sure how to counter this one at the moment, other than to say that I believe the harm of accidentally-unsealed classes is greater than that of accidentally-sealed ones.


Anonymous functions suck.

I'm teaching myself jQuery and, while it's an elegant and immensely useful technology, most people seem to treat it as some kind of competition in maximizing the use of anonymous functions.

Function and procedure naming (along with variable naming) is the greatest expressive ability we have in programming. Passing functions around as data is a great technique, but making them anonymous and therefore non-self-documenting is a mistake. It's a lost chance for expressing the meaning of the code.
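
The same point translated into Java terms (a hypothetical sketch, assuming Java 8+ lambdas rather than jQuery callbacks): a named method reference documents intent at the call site where an equivalent anonymous function does not.

import java.util.Arrays;
import java.util.List;

public class NamedVersusAnonymous {
    static boolean isEven(int n) {
        return n % 2 == 0;
    }

    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4);

        // Anonymous: the reader has to decode the predicate inline.
        numbers.stream().filter(n -> n % 2 == 0).forEach(System.out::println);

        // Named: the call site reads as a sentence.
        numbers.stream().filter(NamedVersusAnonymous::isEven).forEach(System.out::println);
    }
}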


I generally hold pretty controversial, strong and loud opinions, so here's just a couple of them:

"Because we're a Microsoft outfit/partner/specialist" is never a valid argument.

The company I'm working in now identifies itself, first and foremost, as a Microsoft specialist. So the aforementioned argument gets thrown around quite a bit, and I've yet to see a context where it's valid.

I can't see why it's a reason to promote Microsoft's technology and products in every applicable corner, overriding customer and employee satisfaction, and general pragmatics.

This just a cornerstone of my deep hatred towards politics in software business.

MOSS (Microsoft Office Sharepoint Server) is a piece of shit.

Kinda echoes the first opinion, but I honestly think MOSS, as it is, should be shot out of the market. It costs gazillions to license and set up, pukes on web standards and makes developers generally pretty unhappy. I have yet to see a MOSS project that has an overall positive outcome.

Yet time after time, a customer approaches us and asks for a MOSS solution.


I believe that the "Let's Rewrite The Past And Try To Fix That Bug Pretending Nothing Ever Worked" is a valuable debugging mantra in desperate situations:

https://stackoverflow.com/questions/978904/do-you-use-the-orwellian-past-rewriting-debugging-philosophy-closed


I don't believe that any question related to optimization should be flooded with a chant of the misquoted "premature optimization is the root of all evil", because code that is optimized into obfuscation is what makes coding fun.


Microsoft should stop supporting anything dealing with Visual Basic.


My controversial opinion is probably that John Carmack (ID Software, Quake etc.) is not a very good programmer.

Don't get me wrong, he's a very smart programmer in my opinion, but after I noticed the line "#define private public" in the Quake source code I couldn't help but think he's a guy that gets the job done no matter what, but by my definition not a good programmer :) This opinion has gotten me into a lot of heated discussions though ;)


Assembly is the best first programming language.


Requirements analysis, specification, design, and documentation will almost never fit into a "template." You are 100% of the time better off by starting with a blank document and beginning to type with a view of "I will explain this in such a way that if I were dead and someone else read this document, they would know everything that I know and see and understand now" and then organizing from there, letting section headings and such develop naturally and fit the task you are specifying, rather than being constrained to some business or school's idea of what your document should look like. If you have to do a diagram, rather than using somebody's formal and incomprehensible system, you're often better off just drawing a diagram that makes sense, with a clear legend, which actually specifies the system you are trying to specify and communicates the information that the developer on the other end (often you, after a few years) needs to receive.

[If you have to, once you've written the real documentation, you can often shoehorn it into whatever template straightjacket your organization is imposing on you. You'll probably find yourself having to add section headings and duplicate material, though.]

The only time templates for these kinds of documents make sense is when you have a large number of tasks which are very similar in nature, differing only in details. "Write a program to allow single-use remote login access through this modem bank, driving the terminal connection nexus with C-Kermit," "Produce a historical trend and forecast report for capacity usage," "Use this library to give all reports the ability to be faxed," "Fix this code for the year 2000 problem," and "Add database triggers to this table to populate a software product provided for us by a third-party vendor" can not all be described by the same template, no matter what people may think. And for the record, the requirements and design diagramming techniques that my college classes attempted to teach me and my classmates could not be used to specify a simple calculator program (and everyone knew it).


VB 6 could be used for good as well as evil. It was a Rapid Application Development environment in a time of over complicated coding.

I have hated VB vehemently in the past, and still mock VB.NET (probably in jest) as a Fisher Price language due to my dislike of classical VB, but in its day, nothing could beat it for getting the job done.


Development teams should be segregated more often by technological/architectural layers instead of business function.

I come from a general culture where developers own "everything from web page to stored procedure". So in order to implement a feature in the system/application, they would prepare the database table schemas, write the stored procs, match the data access code, implement the business logic and web service methods, and the web page interfaces.

And guess what? Everybody has their own way of doing things! Everyone struggles to learn the ASP.NET AJAX and Telerik or Infragistics suites, Enterprise Library or other productivity, data-layer and persistence frameworks, aspect-oriented frameworks, logging and caching application blocks, DB2 or Oracle peculiarities. And guess what? Everybody takes a heck of a long time to learn how to do things the proper way! Meaning lots of mistakes in the meantime, and plenty of resulting defects and performance bottlenecks! And a heck of a longer time to fix them! Across each and every layer! Everybody has a hand in every Visual Studio project. Nobody is specialised to handle and optimise one problem/technology domain. Too many chefs spoil the soup. All these chefs result in some radioactive goo.

Developers may have cross-layer/domain responsibilities, but they should not pretend that they can be masters of all disciplines, and should be limited to only a few. In my experience, when a project is not a small one and utilises lots of technologies, covering more business functions in a single layer is more productive (as well as encouraging more test code to test that layer) than covering fewer business functions spanning the entire architectural stack (which motivates developers to test only via their UI and not via test code).


The vast majority of software being developed does not involve the end-user when gathering requirements.

Usually it's just some managers who are providing 'requirements'.


Programming is so easy a five year old can do it.

Programming in and of itself is not hard, it's common sense. You are just telling a computer what to do. You're not a genius, please get over yourself.


Most comments in code are in fact a pernicious form of code duplication.

We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.

I think eventually many people just blank them out, especially those flowerbox monstrosities.

Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.

On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.


Code Generation is bad

I hate languages that require you to make use of code generation (or copy&paste) for simple things, like JavaBeans with all their Getters and Setters.

C#'s AutoProperties are a step in the right direction, but for nice DTOs with Fields, Properties and Constructor parameters you still need a lot of redundancy.
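
To make the redundancy concrete, here is a minimal sketch (the DTO and its members are hypothetical, not taken from any real codebase):

    // Java-bean style: the field, the constructor parameter and the getter all
    // restate the same two facts (name: string, age: int).
    public class CustomerDto
    {
        private readonly string name;
        private readonly int age;

        public CustomerDto(string name, int age)
        {
            this.name = name;
            this.age = age;
        }

        public string Name { get { return name; } }
        public int Age { get { return age; } }
    }

    // Auto-properties cut out the explicit backing fields, but the constructor
    // still has to repeat every member one more time.
    public class CustomerDto2
    {
        public CustomerDto2(string name, int age)
        {
            Name = name;
            Age = age;
        }

        public string Name { get; private set; }
        public int Age { get; private set; }
    }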


The best code is often the code you don't write. As programmers we want to solve every problem by writing some cool method. Anytime we can solve a problem and still give the users 80% of what they want without introducing more code to maintain and test we have provided waaaay more value.


This one is not exactly about programming, because HTML/CSS are not programming languages.

Tables are ok for layout

CSS and divs can't do everything; save yourself the hassle and use a simple table, then apply CSS on top of it.


Don't use stored procs in your database.

The reasons they were originally good - security, abstraction, single connection - can all be done in your middle tier with ORMs that integrate lots of other advantages.

This one is definitely controversial. Every time I bring it up, people tear me apart.


Sometimes you have to denormalize your databases.

An opinion that doesn't sit well with most programmers, but sometimes you have to sacrifice things like normalization for performance.


Correct every defect when it's discovered. Not just "severity 1" defects; all defects.

Establish a deployment mechanism that makes application updates immediately available to users, but allows them to choose when to accept these updates. Establish a direct communication mechanism with users that enables them to report defects, relate their experience with updates, and suggest improvements.

With aggressive testing, many defects can be discovered during the iteration in which they are created; immediately correcting them reduces developer interrupts, a significant contributor to defect creation. Immediately correcting defects reported by users forges a constructive community, replacing product quality with product improvement as the main topic of conversation. Implementing user-suggested improvements that are consistent with your vision and strategy produces a community of enthusiastic evangelists.


Developers are all different, and should be treated as such.

Developers don't fit into a box, and shouldn't be treated as such. The best language or tool for solving a problem has just as much to do with the developers as it does with the details of the problem being solved.


"Programmers are born, not made."


Opinion: SQL is code. Treat it as such

That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.

I hate when I see sloppy free-formatted SQL code. If you scream when you see both styles of curly braces on a page, why oh why don't you scream when you see free-formatted SQL, or SQL that obscures or obfuscates the JOIN condition?


SQL could and should have been done better. Because its original spec was limited, various vendors have been extending the language in different directions for years. SQL that is written for MS-SQL is different from SQL for Oracle, IBM, MySQL, Sybase, etc. Other serious languages (take C++ for example) were carefully standardized so that C++ written under one compiler will generally compile unmodified under another. Why couldn't SQL have been designed and standardized better?

HTML was a seriously broken choice as a browser display language. We've spent years extending it through CSS, XHTML, JavaScript, Ajax, Flash, etc. in order to make a usable UI, and the result is still not as good as your basic thick-client Windows app. Plus, a competent web programmer now needs to know three or four languages in order to make a decent UI.

Oh yeah. Hungarian notation is an abomination.


For a good programmer, the language is not a problem.

It may not be very controversial, but I hear a lot of whining from other programmers, like "why don't they all use Delphi?", "C# sucks", "I would change company if they forced me to use Java", and so on.
What I think is that a good programmer is flexible and able to write good programs in any programming language that he might have to learn in his life.


It's okay to be Mort

Not everyone is a "rockstar" programmer; some of us do it because it's a good living, and we don't care about all the latest fads and trends; we just want to do our jobs.


Most Programmers are Useless at Programming

(You did say 'controversial')

I was sat in my office at home pondering some programming problem and I ended up looking at my copy of 'Complete Spectrum ROM Disassembly' on my bookshelf and thinking:

"How many programmers today could write the code used in the Spectrum's ROM?"

The Spectrum, for those unfamiliar with it, had a BASIC programming language that could do simple 2D graphics (lines, curves), file IO of a sort and floating point calculations including transcendental functions, all in 16K of Z80 code (a sub-5MHz 8-bit processor that had no FPU or integer multiply). Most graduates today would have trouble writing a 'Hello World' program that was that small.

I think the problem is that the absolute number of programmers that could do that has hardly changed but as a percentage it is quickly approaching zero. Which means that the quality of code being written is decreasing as more sub-par programmers enter the field.

Where I'm currently working, there are seven programmers including myself. Of these, I'm the only one who keeps up-to-date by reading blogs, books, this site, etc. and doing programming 'for fun' at home (my wife is constantly amazed by this). There's one other programmer who is keen to write well-structured code (interestingly, he did a lot of work using Delphi) and to refactor poor code. The rest are, well, not great. Thinking about it, you could describe them as 'brute force' programmers - they will force inappropriate solutions until they work after a fashion (e.g. using C# arrays with repeated Array.Resize to dynamically add items instead of using a List).
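
For what it's worth, a minimal sketch of the pattern I mean versus the obvious alternative (the data here is made up):

    using System;
    using System.Collections.Generic;

    class ResizeDemo
    {
        static void Main()
        {
            int[] source = { 1, 2, 3, 4, 5 };

            // "Brute force": grow the array by one slot on every append.
            int[] items = new int[0];
            foreach (int value in source)
            {
                Array.Resize(ref items, items.Length + 1);
                items[items.Length - 1] = value;
            }

            // The obvious alternative: let List<T> manage its own capacity.
            var list = new List<int>();
            foreach (int value in source)
            {
                list.Add(value);
            }

            Console.WriteLine("{0} vs {1} elements", items.Length, list.Count);
        }
    }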

Now, I don't know if the place I'm currently at is typical, although from my previous positions I would say it is. With the benefit of hindsight, I can see common patterns that certainly didn't help any of the projects (lack of peer review of code for one).

So, 5 out of 7 programmers are rubbish.

Skizz


Delphi is fun

Yes, I know it's outdated, but Delphi was and is a very fun tool to develop with.


Stay away from Celko!!!!

http://www.dbdebunk.com/page/page/857309.htm

I think it makes a lot more sense to use surrogate primary keys than "natural" primary keys.


@ocdecio: Fabian Pascal gives (in chapter 3 of his book Practical Issues in Database Management, cited in point 3 at the page that you link) as one of the criteria for choosing a key that of stability (it always exists and doesn't change). When a natural key does not possess such a property, then a surrogate key must be used, for evident reasons, to which you hint in the comments.

You don't know what he wrote and you have not bothered to check; otherwise you could discover that you actually agree with him. Nothing controversial there: he was saying "don't be dogmatic, adapt general guidelines to circumstances, and, above all, think - use your brain instead of a dogmatic/cookbook/words-of-guru approach".


Lazy Programmers are the Best Programmers

A lazy programmer most often finds ways to decrease the amount of time spent writing code (especially a lot of similar or repeating code). This often translates into tools and workflows that other developers in the company/team can benefit from.

As the developer encounters similar projects he may create tools to bootstrap the development process (e.g. creating a DRM layer that works with the company's database design paradigms).

Furthermore, developers such as these often use some form of code generation. This means all bugs of the same type (for example, the code generator did not check for null parameters on all methods) can often be fixed by fixing the generator and not the 50+ instances of that bug.

A lazy programmer may take a few more hours to get the first product out the door, but will save you months down the line.


Use of design patterns and documentation

In web development, what's the use of these things? I've never felt any need for them.


All project managers should be required to have coding tasks

In teams that I have worked on where the project manager was actually a programmer who understood the technical issues of the code well enough to accomplish coding tasks, the decisions that were made avoided the communication disconnect that often happens in teams where the project manager is not involved in the code.


Debuggers are a crutch.

It's so controversial that even I don't believe it as much as I used to.

Con: I spend more time getting up to speed on other people's voluminous code, so anything that helps with "how did I get here" and "what is happening", either pre-mortem or post-mortem, can be helpful.

Pro: However, I happily stand by the idea that if you don't understand the answers to those questions for code that you developed yourself or that you've become familiar with, spending all your time in a debugger is not the solution, it's part of the problem.

Before hitting 'Post Your Answer' I did a quick Google check for this exact phrase; it turns out that I'm not the only one who has held this opinion or used this phrase. I turned up a long discussion of this very question on the Fog Creek software forum, which cited various luminaries including Linus Torvalds as notable proponents.



Newer languages, and managed code do not make a bad programmer better.


Linq2Sql is not that bad

I've come across a lot of posts trashing Linq2Sql. I know it's not perfect, but what is?

Personally, I think it has its drawbacks, but overall it can be great for prototyping, or for developing small to medium apps. When I consider how much time it has saved me from writing boring DAL code, I can't complain, especially considering the alternatives we had not so long ago.


VB sucks
While not terribly controversial in general, when you work in a VB house it is


Never change what is not broken.


Recursion is fun.

Yes, I know it can be an inefficient use of stack space, and all that jazz. But sometimes a recursive algorithm is just so nice and clean compared to its iterative counterpart. I always get a bit gleeful when I can sneak a recursive function in somewhere.
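
A small example of what I mean, in C# for the sake of argument (printing a directory tree; the method names are mine):

    using System;
    using System.Collections.Generic;
    using System.IO;

    class TreeWalk
    {
        // Recursive version: reads almost like the definition of the problem.
        static void PrintRecursive(string path, int depth)
        {
            Console.WriteLine(new string(' ', depth * 2) + path);
            foreach (string dir in Directory.GetDirectories(path))
                PrintRecursive(dir, depth + 1);
        }

        // Iterative counterpart: works, but the explicit stack is bookkeeping noise.
        static void PrintIterative(string root)
        {
            var pending = new Stack<KeyValuePair<string, int>>();
            pending.Push(new KeyValuePair<string, int>(root, 0));
            while (pending.Count > 0)
            {
                KeyValuePair<string, int> current = pending.Pop();
                Console.WriteLine(new string(' ', current.Value * 2) + current.Key);
                foreach (string dir in Directory.GetDirectories(current.Key))
                    pending.Push(new KeyValuePair<string, int>(dir, current.Value + 1));
            }
        }

        static void Main()
        {
            PrintRecursive(".", 0);
        }
    }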


Regurgitating well known sayings by programming greats out of context with the zeal of a fanatic and the misplaced assumption that they are ironclad rules really gets my goat. For example 'premature optimization is the root of all evil' as covered by this thread.

IMO, many technical problems and solutions are very context sensitive and the notion of global best practices is a fallacy.


Simplicity Vs Optimality

I believe it's very difficult to write code that's both simple and optimal.


Coding is an Art

Some people think coding is an art, and others think coding is a science.

The "science" faction argues that as the target is to obtain the optimal code for a situation, then coding is the science of studying how to obtain this optimal.

The "art" faction argues there are many ways to obtain the optimal code for a situation, the process is full of subjectivity, and that to choose wisely based on your own skills and experience is an art.


You'll never use enough languages, simply because every language is the best fit for only a tiny class of problems, and it's far too difficult to mix languages.

Pet examples: Java should be used only when the spec is very well thought out (because of lots of interdependencies meaning refactoring hell) and when working with concrete concepts. Perl should only be used for text processing. C should only be used when speed trumps everything, including flexibility and security. Key-value pairs should be used for one-dimensional data, CSV for two-dimensional data, XML for hierarchical data, and a DB for anything more complex.


Programming is neither art nor science. It is an engineering discipline.

It's not art: programming requires creativity for sure. That doesn't make it art. Code is designed and written to work properly, not to be emotionally moving. Except for whitespace, changing code for aesthetic reasons breaks your code. While code can be beautiful, art is not the primary purpose.

It's not science: science and technology are inseparable, but programming is in the technology category. Programming is not systematic study and observation; it is design and implementation.

It's an engineering discipline: programmers design and build things. Good programmers design for function. They understand the trade-offs of different implementation options and choose the one that suits the problem they are solving.


I'm sure there are those out there who would love to parse words, stretching the definitions of art and science to include programming or constraining engineering to mechanical machines or hardware only. Check the dictionary. Also "The Art of Computer Programming" is a different usage of art that means a skill or craft, as in "the art of conversation." The product of programming is not art.


Logger configs are a waste of time. Why have them if it means learning a new syntax, especially one that fails silently? Don't get me wrong, I love good logging. I love logger inheritance and adding formatters to handlers to loggers. But why do it in a config file?

Do you want to make changes to logging code without recompiling? Why? If you put your logging code in a separate class, file, whatever, what difference will it make?

Do you want to distribute a configurable log with your product to clients? Doesn't this just give too much information anyway?

The most frustrating thing about it is that popular utilities written in a popular language tend to write good APIs in the format that language specifies. Write a Java logging utility and I know you've generated the javadocs, which I know how to navigate. Write a domain specific language for your logger config and what do we have? Maybe there's documentation, but where the heck is it? You decide on a way to organize it, and I'm just not interested in following your line of thought.


One I have been tossing around for a while:

The data is the system.

Processes and software are built for data, not the other way around.

Without data, the process/software has little value. Data still has value without a process or software around it.

Once we understand the data, what it does, how it interacts, the different forms it exists in at different stages, only then can a solution be built to support the system of data.

Successful software/systems/processes seem to have an acute awareness, if not a fanatical mindfulness, of "where" the data is at any given moment.


Tools, Methodology, Patterns, Frameworks, etc. are no substitute for a properly trained programmer

I'm sick and tired of dealing with people (mostly managers) who think that the latest tool, methodology, pattern or framework is a silver bullet that will eliminate the need for hiring experienced developers to write their software. Although, as a consultant who makes a living rescuing at-risk projects, I shouldn't complain.


Opinion: most code out there is crappy, because that's what the programmers WANT it to be.

Indirectly, we have been nurturing a culture of extreme creativeness. It's not that I don't think problem solving has creative elements -- it does -- it's just that it's not even remotely the same as something like painting (see Paul Graham's famous "Hackers and Painters" essay).

If we bend our industry towards that approach, ultimately it means letting every programmer go forth and whack out whatever highly creative, crazy stuff they want. Of course, for any sizable project, trying to put together dozens of unrelated, unstructured, unplanned bits into one final coherent bit won't work by definition. That's not a guess, or an estimate, it's the state of the industry that we face today. How many times have you seen sub-bits of functionality in a major program that were completely inconsistent with the rest of the code? It's so common now, it's a wonder anyone can use any of these messes.

Convoluted, complicated, ugly stuff that just keeps getting worse and more unstable. If we were building something physical, everyone on the planet would call us out on how horribly ugly and screwed up the stuff is, but because it is more or less hidden by being virtual, we are able to get away with some of the worst manufacturing processes that our species will ever see. (Can you imagine a car where four different people designed the four different wheels, in four different ways?)

But the sad part, the controversial part of it all, is that there is absolutely NO reason for it to be this way, other than historically the culture was towards more freedom and less organization, so we stayed that way (and probably got a lot worse). Software development is a joke, but it's a joke because that's what the programmers want it to be (but would never in a million years admit that it was true, a "plot by management" is a better reason for most people).

How long will we keep shooting ourselves in the foot before we wake up and realize that we are the ones holding the gun, pointing it and pulling the trigger?

Paul.


There is no "one size fits all" approach to development

I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.

Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.

People even seem to go as far as putting badges on their blogs such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.

It isn't.

Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.


A Clever Programmer Is Dangerous

I have spent far too much time trying to fix code written by "clever" programmers. I'd rather have a good programmer than an exceptionally smart programmer who wants to prove how clever he is by writing code that only he (or she) can interpret.


Useful and clean high-level abstractions are significantly more important than performance

one example:

Too often I watch peers spending hours writing overcomplicated sprocs, or massive LINQ queries which return unintuitive anonymous types, for the sake of "performance".

They could achieve almost the same performance but with considerably cleaner, intuitive code.


Explicit self in Python's method declarations is poor design choice.

Method calls got syntactic sugar, but declarations didn't. It's a leaky abstraction (by design!) that causes annoying errors, including runtime errors with apparent off-by-one error in reported number of arguments.


Print statements are a valid way to debug code

I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than stepping through with a debugger, and you can compare printed outputs against other runs of the app.

Just make sure to remove the print statements when you go to production (or better, turn them into logging statements).
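
In C# terms (Console.WriteLine rather than System.out.println), a rough sketch of the idea, with made-up method names:

    using System;
    using System.Diagnostics;

    class PrintDebugDemo
    {
        static int CountItems(string csv)
        {
            // Quick-and-dirty print debugging while chasing a problem:
            Console.WriteLine("CountItems called with: '{0}'", csv);

            int count = csv.Split(',').Length;

            // The "turn them into logging statements" option: Debug.WriteLine can
            // stay in the code and is compiled out of non-DEBUG builds.
            Debug.WriteLine("CountItems -> " + count);

            return count;
        }

        static void Main()
        {
            Console.WriteLine(CountItems("apples,pears,plums"));
        }
    }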


Dependency Management Software Does More Harm Than Good

I've worked on Java projects that included upwards of a hundred different libraries. In most cases, each library has its own dependencies, and those dependent libraries have their own dependencies too.

Software like Maven or Ivy supposedly "manage" this problem by automatically fetching the correct version of each library and then recursively fetching all of its dependencies.

Problem solved, right?

Wrong.

Downloading libraries is the easy part of dependency management. The hard part is creating a mental model of the software, and how it interacts with all those libraries.

My unpopular opinion is this:

If you can't verbally explain, off the top of your head, the basic interactions between all the libraries in your project, you should eliminate dependencies until you can.

Along the same lines, if it takes you longer than ten seconds to list all of the libraries (and their methods) invoked either directly or indirectly from one of your functions, then you are doing a poor job of managing dependencies.

You should be able to easily answer the question "which parts of my application actually depend on library XYZ?"

The current crop of dependency management tools do more harm than good, because they make it easy to create impossibly-complicated dependency graphs, and they provide virtually no functionality for reducing dependencies or identifying problems.

I've seen developers include 10 or 20 MB worth of libraries, introducing thousands of dependent classes into the project, just to eliminate a few dozen lines of simple custom code.

Using libraries and frameworks can be good. But there's always a cost, and tools which obscure that cost are inherently problematic.

Moreover, it's sometimes (note: certainly not always) better to reinvent the wheel by writing a few small classes that implement exactly what you need than to introduce a dependency on a large general-purpose library.


I think we should move away from C. It's too old! But the old dog is still barking louder than ever!


You can't measure productivity by counting lines of code.

Everyone knows this, but for some reason the practice still persists!


A picture is not worth a thousand words.

Some pictures might be worth a thousand words. Most of them are not. This trite old aphorism is mostly untrue and is a pathetic excuse for many a lazy manager who did not want to read carefully created reports and documentation to say "I need you to show me in a diagram."

My wife studied for a linguistics major and saw several fascinating proofs against the conventional wisdom on pictures and logos: they do not break across language and cultural barriers, they usually do not communicate anywhere near as much information as correct text, they simply are no substitute for real communication.

In particular, labeled bubbles connected with lines are useless if the lines are unlabeled and unexplained, and/or if every line has a different meaning instead of signifying the same relationship (unless distinguished from each other in some way). If your lines sometimes signify relationships and sometimes indicate actions and sometimes indicate the passage of time, you're really hosed.

Every good programmer knows you use the tool suited for the job at hand, right? Not all systems are best specified and documented in pictures. Graphical specification languages that can be automatically turned into provably-correct, executable code or whatever are a spectacular idea, if such things exist. Use them when appropriate, not for everything under the sun. Entity-Relationship diagrams are great. But not everything can be summed up in a picture.

Note: a table may be worth its weight in gold. But a table is not the same thing as a picture. And again, a well-crafted short prose paragraph may be far more suitable for the job at hand.


You don't have to program everything

I'm getting tired of everything - and I mean everything - needing to be stuffed into a program, as if that is always faster. Everything needs to be web-based, everything needs to be done via a computer. Please, just use your pen and paper. It's faster and needs less maintenance.


Rob Pike wrote: "Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming."

And since these days any serious data is in the millions of records, I contend that good data modeling is the most important programming skill (whether using an RDBMS or something like SQLite, Amazon SimpleDB or Google App Engine data storage).

Fancy search and sorting algorithms aren't needed any more when the data, all the data, is stored in such a data storage system.


Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.

I think that a method should be created wherever you can name one.


If you have any idea how to program you are not fit to place a button on a form

Is that controversial enough? ;)

No matter how hard we try, it's almost impossible to have appropriate empathy with 53 year old Doris who has to use our order-entry software. We simply cannot grasp the mental model of what she imagines is going on inside the computer, because we don't need to imagine: we know what's going on, or have a very good idea.

Interaction Design should be done by non-programmers. Of course, this is never actually going to happen. Contradictorily I'm quite glad about that; I like UI design even though deep down I know I'm unsuited to it.

For further info, read the book The Inmates Are Running the Asylum. Be warned, I found this book upsetting and insulting; it's a difficult read if you are a developer that cares about the user's experience.


1-based arrays should always be used instead of 0-based arrays. 0-based arrays are unnatural, unnecessary, and error prone.

When I count apples or employees or widgets I start at one, not zero. I teach my kids the same thing. There is no such thing as a 0th apple or 0th employee or 0th widget. Using 1 as the base for an array is much more intuitive and less error-prone. Forget about plus-one-minus-one-hell (as we used to call it). 0-based arrays are an unnatural construct invented by computer scientists - they do not reflect reality, and computer programs should reflect reality as much as possible.


Using Stored Procedures

Unless you are writing a large procedural function composed of non-reusable SQL queries, please move your stored procedures out of the database and into version control.


Zealous adherence to standards stands in the way of simplicity.

MVC is over-rated for websites. It's mostly just VC, sometimes M.


Code layout does matter

Maybe specifics of brace position should remain purely religious arguments - but it doesn't mean that all layout styles are equal, or that there are no objective factors at all!

The trouble is that the uber-rule for layout, namely "be consistent", sound as it is, is used as a crutch by many to never try to see if their default style can be improved on - and, furthermore, to claim that it doesn't even matter.

A few years ago I was studying Speed Reading techniques, and some of the things I learned about how the eye takes in information in "fixations", can most optimally scan pages, and the role of subconsciously picking up context, got me thinking about how this applied to code - and writing code with it in mind especially.

It led me to a style that tended to be columnar in nature, with identifiers logically grouped and aligned where possible (in particular I became strict about having each method argument on its own line). However, rather than long columns of unchanging structure, it's actually beneficial to vary the structure in blocks so that you end up with rectangular islands that the eye can take in in a single fixation - even if you don't consciously read every character.

The net result is that, once you get used to it (which typically takes 1-3 days) it becomes pleasing to the eye, easier and faster to comprehend, and is less taxing on the eyes and brain because it's laid out in a way that makes it easier to take in.

Almost without exception, everyone I have asked to try this style (including myself) initially said, "ugh I hate it!", but after a day or two said, "I love it - I'm finding it hard not to go back and rewrite all my old stuff this way!".

I've been hoping to find the time to do more controlled experiments to collect together enough evidence to write a paper on, but as ever have been too busy with other things. However this seemed like a good opportunity to mention it to people interested in controversial techniques :-)

[Edit]

I finally got around to blogging about this (after many years parked in the "meaning to" phase): Part one, Part two, Part three.


I have a few... there are exceptions to everything, so these are not hard and fast, but they do apply in most cases.

Nobody cares if your website validates, is XHTML strict, is standards-compliant, or has a W3C badge.

It may earn you some high-fives from fellow web developers, but the rest of the people looking at your site couldn't give a crap whether you've validated your code or not. The vast majority of web surfers are using IE or Firefox, and since both of those browsers are forgiving of non-standard, non-strict, invalid HTML, you really don't need to worry about it. If you've built a site for a car dealer, a mechanic, a radio station, a church, or a local small business, how many people in any of those businesses' target demographics do you think care about valid HTML? I'd hazard a guess it's pretty close to 0.

Most open-source software is useless, overcomplicated crap.

Let me install this nice piece of OSS I've found. It looks like it should do exactly what I want! Oh wait, first I have to install this other window manager thingy. OK. Then I need to get this command-line tool and add it to my path. Now I need the latest runtimes for X, Y, and Z. Now I need to make sure I have these processes running. OK, great... it's all configured. Now let me learn a whole new set of commands to use it. Oh cool, someone built a GUI for it. I guess I don't need to learn these commands. Wait, I need this library on here to get the GUI to work. Gotta download that now. OK, now it's working... crap, I can't figure out this terrible UI.

Sound familiar? OSS is full of complication for complication's sake, tricky installs that you need to be an expert to perform, and tools that most people wouldn't know what to do with anyway. So many projects fall by the wayside, others are so niche that very few people would use them, and some of the decent ones (FlowPlayer, OSCommerce, etc.) have such ridiculously overcomplicated and bloated source code that it defeats the purpose of being able to edit the source. You can edit the source... if you can figure out which of the 400 files contains the code that needs modification. You're really in trouble when you learn that it's all 400 of them.


C (or C++) should be the first programming language

The first language should NOT be the easy one; it should be one that sets up the student's mind and prepares it for serious computer science.
C is perfect for that: it forces students to think about memory and all the low-level stuff, and at the same time they can learn how to structure their code (it has functions!).

C++ has the added advantage that it really sucks :) thus the students will understand why people had to come up with Java and C#


Cleanup and refactoring are very important in (team) development

A lot of work in team development has to do with management. If you are using a bug tracker, then it is only useful if someone takes the time to close and structure things and lower the number of tickets. If you are using source code management, somebody needs to clean up and restructure the repository quite often. If you are programming, then there should be people who care about refactoring the lazily produced stuff of others. It is part of most aspects of software development.

Everybody agrees on the necessity of this kind of management. And it is always the first thing that is skipped!


My most controversial programming opinion is that finding performance problems is not about measuring, it is about capturing.

If you're hunting for elephants in a room (as opposed to mice) do you need to know how big they are? NO! All you have to do is look. Their very bigness is what makes them easy to find! It isn't necessary to measure them first.

The idea of measurement has been common wisdom at least since the paper on gprof (Susan L. Graham, et al 1982)*, when all along, right under our noses, has been a very simple and direct way to find code worth optimizing.

As a small example, here's how it works. Suppose you take 5 random-time samples of the call stack, and you happen to see a particular instruction on 3 out of 5 samples. What does that tell you?

.............   .............   .............   .............   .............
.............   .............   .............   .............   .............
Foo: call Bar   .............   .............   Foo: call Bar   .............
.............   Foo: call Bar   .............   .............   .............
.............   .............   .............   Foo: call Bar   .............
.............   .............   .............   .............   .............
                .............                                   .............

It tells you the program is spending 60% of its time doing work requested by that instruction. Removing it removes that 60%:

...\...../...   ...\...../...   .............   ...\...../...   .............
....\.../....   ....\.../....   .............   ....\.../....   .............
Foo: \a/l Bar   .....\./.....   .............   Foo: \a/l Bar   .............
......X......   Foo: cXll Bar   .............   ......X......   .............
...../.\.....   ...../.\.....   .............   Foo: /a\l Bar   .............
..../...\....   ..../...\....   .............   ..../...\....   .............
   /     \      .../.....\...                      /     \      .............

Roughly.

If you can remove the instruction (or invoke it a lot less), that's a 2.5x speedup, approximately. (Notice - recursion is irrelevant - if the elephant's pregnant, it's not any smaller.) Then you can repeat the process, until you truly approach an optimum.

  • This did not require accuracy of measurement, function timing, call counting, graphs, hundreds of samples, any of that typical profiling stuff.

Some people use this whenever they have a performance problem, and don't understand what's the big deal.

Most people have never heard of it, and when they do hear of it, think it is just an inferior mode of sampling. But it is very different, because it pinpoints problems by giving cost of call sites (as well as terminal instructions), as a percent of wall-clock time. Most profilers (not all), whether they use sampling or instrumentation, do not do that. Instead they give a variety of summary measurements that are, at best, clues to the possible location of problems. Here is a more extensive summary of the differences.

*In fact that paper claimed that the purpose of gprof was to "help the user evaluate alternative implementations of abstractions". It did not claim to help the user locate the code needing an alternative implementation, at a finer level than functions.


My second most controversial opinion is this, or it might be if it weren't so hard to understand.


Sometimes it's okay to use regexes to extract something from HTML. Seriously, wrangle with an obtuse parser, or use a quick regex like /<a href="([^"]+)">/? It's not perfect, but your software will be up and running much quicker, and you can probably use yet another regex to verify that the match that was extracted is something that actually looks like a URL. Sure, it's hacky, and probably fails on several edge-cases, but it's good enough for most usage.
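
For example, in C# (the HTML string here is invented; the pattern is the same quick-and-dirty one quoted above):

    using System;
    using System.Text.RegularExpressions;

    class LinkScraper
    {
        static void Main()
        {
            string html = "<p>See <a href=\"http://example.com/docs\">the docs</a>.</p>";

            // Group 1 captures whatever sits inside the href attribute.
            Match m = Regex.Match(html, "<a href=\"([^\"]+)\">");
            if (m.Success)
            {
                string url = m.Groups[1].Value;

                // A second regex as the sanity check mentioned above - does the
                // captured text at least look like a URL?
                if (Regex.IsMatch(url, "^https?://"))
                    Console.WriteLine(url);
            }
        }
    }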

Based on the massive volume of "How use regex get HTML?" questions that get posted here almost daily, and the fact that every answer is "Use an HTML parser", this should be controversial enough.


I can live without closures.

Looks like nowadays everyone and their mother wants closures to be present in a language because they are the greatest invention since sliced bread. And I think it is just hype.


A Good Programmer Hates Coding

Similar to "A Good Programmer is a Lazy Programmer" and "Less Code is Better." But by following this philosophy, I have managed to write applications which might otherwise use several times as much code (and take several times as much development time). In short: think before you code. Most of the parts of my own programs which end up causing problems later were parts that I actually enjoyed coding, and thus had too much code, and thus were poorly written. Just like this paragraph.

A Good Programmer is a Designer

I've found that programming uses the same concepts as design (as in, the same design concepts used in art). I'm not sure most other programmers find the same thing to be true; maybe it is a right brain/left brain thing. Too many programs out there are ugly, from their code to their command line user interface to their graphical user interface, and it is clear that the designers of these programs were not, in fact, designers.

Although correlation may not, in this case, imply causation, I've noticed that as I've become better at design, I've become better at coding. The same process of making things fit and feel right can and should be used in both places. If code doesn't feel right, it will cause problems because either a) it is not right, or b) you'll assume it works in a way that "feels right" later, and it will then again be not right.

Art and code are not on opposite ends of the spectrum; code can be used in art, and can itself be a form of art.

Disclaimer: Not all of my code is pretty or "right," unfortunately.


Write your spec when you are finished coding. (if at all)

In many projects I have been involved in, a great deal of effort was spent at the outset writing a "spec" in Microsoft Word. This process culminated in a "sign off" meeting when the big shots bought in on the project, and after that meeting nobody ever looked at this document again. These documents are a complete waste of time and don't reflect how software is actually designed. This is not to say there are not other valuable artifacts of application design. They are usually contained on index cards, snapshots of whiteboards, cocktail napkins and other similar media that provide a kind of timeline for the app design. These are usually the real specs of the app. If you are going to write a Word document (and I am not particularly saying you should), do it at the end of the project. At least it will accurately represent what has been done in the code and might help someone down the road, like the QA team or the next version's developers.


As there are hundreds of answers to this mine will probably end up unread, but here's my pet peeve anyway.

If you're a programmer then you're most likely awful at Web Design/Development

This website is a phenomenal resource for programmers, but an absolutely awful place to come if you're looking for XHTML/CSS help. Even the good Web Developers here are handing out links to resources that were good in the 90's!

Sure, XHTML and CSS are simple to learn. However, you're not just learning a language! You're learning how to use it well, and very few designers and developers can do that, let alone programmers. It took me ages to become a capable designer and even longer to become a good developer. I could code in HTML from the age of 10 but that didn't mean I was good. Now I am a capable designer in programs like Photoshop and Illustrator, I am perfectly able to write a good website in Notepad and am able to write basic scripts in several languages. Not only that but I have a good nose for Search Engine Optimisation techniques and can easily tell you where the majority of people are going wrong (hint: get some good content!).

Also, this place is a terrible resource for advice on web standards. You should NOT just write code to work in the different browsers. You should ALWAYS follow the standard to future-proof your code. More often than not the fixes you use on your websites will break when the next browser update comes along. Not only that but the good browsers follow standards anyway. Finally, the reason IE was allowed to ruin the Internet was because YOU allowed it by coding your websites for IE! If you're going to continue to do that for Firefox then we'll lose out yet again!

If you think that table-based layouts are as good, if not better than CSS layouts then you should not be allowed to talk on the subject, at least without me shooting you down first. Also, if you think W3Schools is the best resource to send someone to then you're just plain wrong.

If you're new to Web Design/Development don't bother with this place (it's full of programmers, not web developers). Go to a good Web Design/Development community like SitePoint.


The world needs more GOTOs

GOTOs are avoided religiously often with no reasoning beyond "my professor told me GOTOs are bad." They have a purpose and would greatly simplify production code in many places.

That said, they aren't really necessary in 99% of the code you'll ever write.


I'm probably gonna get roasted for this, but:

Making invisible characters syntactically significant in python was a bad idea

It's distracting, causes lots of subtle bugs for novices and, in my opinion, wasn't really needed. About the only code I've ever seen that didn't voluntarily follow some sort of decent formatting guide was from first-year CS students. And even if code doesn't follow "nice" standards, there are plenty of tools out there to coerce it into a more pleasing shape.


C must die.

Voluntarily programming in C when another language (say, D) is available should be punishable for neglect.


Software Architects/Designers are Overrated

As a developer, I hate the idea of Software Architects. They are basically people that no longer code full time, read magazines and articles, and then tell you how to design software. Only people that actually write software full time for a living should be doing that. I don't care if you were the worlds best coder 5 years ago before you became an Architect, your opinion is useless to me.

How's that for controversial?

Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.


Requirements analysis, specification, design, and documentation will almost never fit into a "template." You are 100% of the time better off by starting with a blank document and beginning to type with a view of "I will explain this in such a way that if I were dead and someone else read this document, they would know everything that I know and see and understand now" and then organizing from there, letting section headings and such develop naturally and fit the task you are specifying, rather than being constrained to some business or school's idea of what your document should look like. If you have to do a diagram, rather than using somebody's formal and incomprehensible system, you're often better off just drawing a diagram that makes sense, with a clear legend, which actually specifies the system you are trying to specify and communicates the information that the developer on the other end (often you, after a few years) needs to receive.

[If you have to, once you've written the real documentation, you can often shoehorn it into whatever template straightjacket your organization is imposing on you. You'll probably find yourself having to add section headings and duplicate material, though.]

The only time templates for these kinds of documents make sense is when you have a large number of tasks which are very similar in nature, differing only in details. "Write a program to allow single-use remote login access through this modem bank, driving the terminal connection nexus with C-Kermit," "Produce a historical trend and forecast report for capacity usage," "Use this library to give all reports the ability to be faxed," "Fix this code for the year 2000 problem," and "Add database triggers to this table to populate a software product provided for us by a third-party vendor" can not all be described by the same template, no matter what people may think. And for the record, the requirements and design diagramming techniques that my college classes attempted to teach me and my classmates could not be used to specify a simple calculator program (and everyone knew it).


Web services absolutely suck, and are not the way of the future. They are ridiculously inefficient and they don't guarantee ordered delivery. Web services should NEVER be used within a system where both client and server are being written. They are mostly useful for Mickey Mouse mash-up type applications. They should definitely not be used for any kind of connection-oriented communication.

This stance has gotten me and my colleagues into some very heated discussions, since web services are such a buzzy topic. Any project that mandates the use of web services is doomed, because it is clearly already having ridiculous demands pushed down from management.


Sometimes it's appropriate to swallow an exception.

For UI bells and whistles, prompting the user with an error message is an interruption, and there is usually nothing for them to do anyway. In this case, I just log it, and deal with it when it shows up in the logs.
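
A sketch of what I mean (the class and method names are hypothetical):

    using System;
    using System.Diagnostics;

    class StatusBar
    {
        // Purely cosmetic feature: if it fails, the user can't do anything about it
        // anyway, so swallow the exception, log it, and move on.
        public void RefreshWeatherWidget()
        {
            try
            {
                UpdateWeather();
            }
            catch (Exception ex)
            {
                Trace.TraceWarning("Weather widget update failed: {0}", ex);
            }
        }

        private void UpdateWeather()
        {
            // Stand-in for a call to some flaky external service.
            throw new TimeoutException("service unavailable");
        }
    }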


Respect the Single Responsibility Principle

At first glance you might not think this would be controversial, but in my experience, when I mention to another developer that they shouldn't be doing everything in the page load method, they often push back... so, for the children, please quit building the "do everything" method we see all too often.
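
To be concrete about the kind of split I'm asking for - a rough sketch only, with invented names:

    using System;

    // Hypothetical page class; the point is only that Page_Load coordinates
    // while each responsibility lives in its own small method.
    class OrderPage
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            int orderId = ParseOrderId();        // request parsing
            string order = LoadOrder(orderId);   // data access
            BindOrderGrid(order);                // UI binding
            NotifyCustomer(order);               // notification
        }

        // Stub bodies - stand-ins for the real work.
        private int ParseOrderId() { return 42; }
        private string LoadOrder(int id) { return "Order " + id; }
        private void BindOrderGrid(string order) { }
        private void NotifyCustomer(string order) { }
    }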


When someone dismisses an entire programming language as "clumsy", it usually turns out he doesn't know how to use it.


You need to watch out for Object-Obsessed Programmers.

e.g. if you write a class that models built-in types such as ints or floats, you may be an object-obsessed programmer.


It takes less time to produce well-documented code than poorly-documented code

When I say well-documented I mean with comments that communicate your intention clearly at every step. Yes, typing comments takes some time. And yes, your coworkers should all be smart enough to figure out what you intended just by reading your descriptive function and variable names and spelunking their way through all your executable statements. But it takes more of their time to do it than if you had just explained your intentions, and clear documentation is especially helpful when the logic of the code turns out to be wrong. Not that your code would ever be wrong...

I firmly believe that if you time it from when you start a project to when you ship a defect-free product, writing well-documented code takes less time. For one thing, having to explain clearly what you're doing forces you to think it through clearly, and if you can't write a clear, concise explanation of what your code is accomplishing then it's probably not designed well. And for another purely selfish reason, well-documented and well-structured code is far easier to dump onto someone else to maintain - thus freeing the original author to go create the next big thing. I rarely if ever have to stop what I'm doing to explain how my code was meant to work because it's blatantly obvious to anyone who can read English (even if they can't read C/C++/C# etc.). And one more reason is, frankly, my memory just isn't that good! I can't recall what I had for breakfast yesterday, much less what I was thinking when I wrote code a month or a year ago. Perhaps your memory is far better than mine, but because I document my intentions I can quickly pick up wherever I left off and make changes without having to first figure out what I was thinking when I wrote it.

That's why I document well - not because I feel some noble calling to produce pretty code fit for display, and not because I'm a purist, but simply because end-to-end it lets me ship quality software in less time.


Use unit tests as a last resort to verify code.

If I want to verify that code is correct, I prefer the following techniques over unit testing:

  1. Type checking
  2. Assertions
  3. Trivially verifiable code

For everything else, there's unit tests.
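
As an illustration of the assertion-style checks I mean (a made-up pricing function, using Debug.Assert):

    using System.Diagnostics;

    static class Pricing
    {
        // The invariants are documented and verified right where they hold,
        // with no separate test fixture needed for the trivial cases.
        public static decimal ApplyDiscount(decimal total, decimal rate)
        {
            Debug.Assert(total >= 0, "total must be non-negative");
            Debug.Assert(rate >= 0 && rate <= 1, "rate must be a fraction between 0 and 1");

            decimal result = total - (total * rate);

            Debug.Assert(result <= total, "a discount should never increase the total");
            return result;
        }
    }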


Don't use inheritance unless you can explain why you need it.


Relational Databases are a waste of time. Use object databases instead!

Relational database vendors try to fool us into believing that the only scalable, persistent and safe storage in the world is relational databases. I am a certified DBA. Have you ever spent hours trying to optimize a query and had no idea what was going wrong? Relational databases don't let you make your own search paths when you need them. You give away much of the control over the speed of your app into the hands of people you've never met, and they are not as smart as you think.

Sure, sometimes in a well-maintained database they come up with a quick answer for a complex query. But the price you pay for this is too high! You have to choose between writing raw SQL every time you want to read an entry of your data, which is dangerous, or using an object-relational mapper, which adds more complexity and things outside your control.

More importantly, you are actively forbidden from coming up with smart search algorithms, because every damn roundtrip to the database costs you around 11ms. It is too much. Imagine you know this super graph algorithm which will answer a specific question (one which might not even be expressible in SQL!) in due time. But even if your algorithm is linear, and interesting algorithms are not linear, forget about combining it with a relational database, as enumerating a large table will take you hours!

Compare that with SandstoneDb, or Gemstone for Smalltalk! If you are into Java, give db4o a shot.

So, my advice is: use an object DB. Sure, they aren't perfect and some queries will be slower. But you will be surprised how many will be faster, because loading the objects will not require all these strange transformations between SQL and your domain data. And if you really need speed for a certain query, object databases have the query optimizer you should trust: your brain.


I've been burned for broadcasting these opinions in public before, but here goes:

Well-written code in dynamically typed languages follows static-typing conventions

Having used Python, PHP, Perl, and a few other dynamically typed languages, I find that well-written code in these languages follows static typing conventions, for example:

  • It's considered bad style to re-use a variable with different types (for example, it's bad style to take a list variable and assign it an int, then assign the variable a bool in the same method). Well-written code in dynamically typed languages doesn't mix types.

  • A type-error in a statically typed language is still a type-error in a dynamically typed language.

  • Functions are generally designed to operate on a single datatype at a time, so that a function which accepts a parameter of type T can only sensibly be used with objects of type T or subclasses of T.

  • Functions designed to operate on many different datatypes are written in a way that constrains parameters to a well-defined interface. In general terms, if two objects of types A and B perform a similar function, but aren't subclasses of one another, then they almost certainly implement the same interface.

While dynamically typed languages certainly provide more than one way to crack a nut, most well-written, idiomatic code in these languages pays close attention to types just as rigorously as code written in statically typed languages.

Dynamic typing does not reduce the amount of code programmers need to write

When I point out how peculiar it is that so many static-typing conventions cross over into dynamic typing world, I usually add "so why use dynamically typed languages to begin with?". The immediate response is something along the lines of being able to write more terse, expressive code, because dynamic typing allows programmers to omit type annotations and explicitly defined interfaces. However, I think the most popular statically typed languages, such as C#, Java, and Delphi, are bulky by design, not as a result of their type systems.

I like to use languages with a real type system like OCaml, which is not only statically typed, but its type inference and structural typing allow programmers to omit most type annotations and interface definitions.

The existence of the ML family of languages demonstrates that we can enjoy the benefits of static typing with all the brevity of writing in a dynamically typed language. I actually use OCaml's REPL for ad hoc, throwaway scripts in exactly the same way everyone else uses Perl or Python as a scripting language.


C++ is one of the WORST programming languages - EVER.

It has all of the hallmarks of something designed by committee - it does not do any given job well, and does some jobs (like OO) terribly. It has a "kitchen sink" desperation to it that just won't go away.

It is a horrible "first language" to learn to program with. You get no elegance, no assistance (from the language). Instead you have bear traps and mine fields (memory management, templates, etc.).

It is not a good language to try to learn OO concepts. It behaves as "C with a class wrapper" instead of a proper OO language.

I could go on, but will leave it at that for now. I have never liked programming in C++, and although I "cut my teeth" on FORTRAN, I totally loved programming in C. I still think C was one of the great "classic" languages. Something that C++ is certainly NOT, in my opinion.

Cheers,

-R

EDIT: To respond to the comments on teaching C++. You can teach C++ in two ways - either teaching it as C "on steroids" (start with variables, conditions, loops, etc), or teaching it as a pure "OO" language (start with classes, methods, etc). You can find teaching texts that use one or other of these approaches. I prefer the latter approach (OO first) as it does emphasize the capabilities of C++ as an OO language (which was the original design emphasis of C++). If you want to teach C++ "as C", then I think you should teach C, not C++.

But the problem with C++ as a first language in my experience is that the language is simply too BIG to teach in one semester, plus most "intro" texts try to cover everything. It is simply not possible to cover all the topics in a "first language" course. You have to at least split it into 2 semesters, and then it's no longer a "first language", IMO.

I do teach C++, but only as a "new language" - that is, you must be proficient in some prior "pure" language (not scripting or macros) before you can enroll in the course. C++ is a very fine "second language" to learn, IMO.

-R

'Nother Edit: (to Konrad)

I do not at all agree that C++ "is superior in every way" to C. I spent years coding C programs for microcontrollers and other embedded applications. The C compilers for these devices are highly optimized, often producing code as good as hand-coded assembler. When you move to C++, you take on a tremendous overhead imposed by the compiler in order to manage language features you may not use. In embedded applications, you gain little by adding classes and such, IMO. What you need is tight, clean code. You can write it in C++, but then you're really just writing C, and the C compilers are more optimized in these applications.

I wrote a MIDI engine, first in C, later in C++ (at the vendor's request) for an embedded controller (sound card). In the end, to meet the performance requirements (MIDI timings, etc.) we had to revert to pure C for all of the core code. We were able to use C++ for the high-level code, and having classes was very sweet - but we needed C to get the performance at the lower level. The C code was an order of magnitude faster than the C++ code, but hand-coded assembler was only slightly faster than the compiled C code. This was back in the early 1990s, just to put those events in their proper context.

-R


I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.

For one, I believe that a first programming language should highlight the need to learn control flow and variables, not objects and syntax.

For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.

Also the natural progression should be from "how can I do this" to "how can I find the library which does that" and not the other way round.


Object Oriented Programming is overused

Sometimes the best answer is the simple answer.


Code as Design: Three Essays by Jack W. Reeves

The source code of any software is its most accurate design document. Everything else (specs, docs, and sometimes comments) is either incorrect, outdated or misleading.

Guaranteed to get you fired pretty much everywhere.


Don't comment your code

Comments are not code, and therefore when things change it's very easy not to change the comment that explained the code. Instead I prefer to refactor the crap out of code to a point that there is no reason for a comment. An example:

if(data == null)  // First time on the page

to:

bool firstTimeOnPage = data == null;
if(firstTimeOnPage)

The only time I really comment is when it's a TODO or when I'm explaining why:

Widget.GetData(); // only way to grab data, TODO: extract interface or wrapper

Boolean variables should be used only for Boolean logic. In all other cases, use enumerations.


Boolean variables are used to store data that can only take on two possible values. The problems that arise from using them are frequently overlooked:

  • Programmers often cannot correctly identify when some piece of data should only have two possible values
  • The people who instruct programmers what to do, such as program managers or whomever writes the specs that programmers follow, often cannot correctly identify this either
  • Even when a piece of data is correctly identified as having only two possible states, that guarantee may not hold in the future.

In these cases, using Boolean variables leads to confusing code that can often be prevented by using enumerations.

Example

Say a programmer is writing software for a car dealership that sells only cars and trucks. The programmer develops a thorough model of the business requirements for his software. Knowing that the only types of vehicles sold are cars and trucks, he correctly identifies that he can use a boolean variable inside a Vehicle class to indicate whether the vehicle is a car or a truck.

class Vehicle {
 bool isTruck;
 ...
}

The software is written so when isTruck is true a vehicle is a truck, and when isTruck is false the vehicle is a car. This is a simple check performed many times throughout the code.

Everything works without trouble, until one day when the car dealership buys another dealership that sells motorcycles as well. The programmer has to update the software so that it works correctly considering the dealership's business has changed. It now needs to identify whether a vehicle is a car, truck, or motorcycle, three possible states.

How should the programmer implement this? isTruck is a boolean variable, so it can hold only two states. He could change it from a boolean to some other type that allows many states, but this would break existing logic and possibly not be backwards compatible. The simplest solution from the programmer's point of view is to add a new variable to represent whether the vehicle is a motorcycle.

class Vehicle {
 bool isTruck;
 bool isMotorcycle;
 ...
}

The code is changed so that when isTruck is true a vehicle is a truck, when isMotorcycle is true a vehicle is a motorcycle, and when they're both false a vehicle is a car.

Problems

There are two big problems with this solution:

  • The programmer wants to express the type of the vehicle, which is one idea, but the solution uses two variables to do so. Someone unfamiliar with the code will have a harder time understanding the semantics of these variables than if the programmer had used just one variable that specifies the type entirely.
  • Solving this motorcycle problem by adding a new boolean doesn't make it any easier for the programmer to deal with such situations that happen in the future. If the dealership starts selling buses, the programmer will have to repeat all these steps over again by adding yet another boolean.

It's not the developer's fault that the business requirements of his software changed, requiring him to revise existing code. But using boolean variables in the first place made his code less flexible and harder to modify to satisfy unknown future requirements (less "future-proof"). When he implemented the changes in the quickest way, the code became harder to read. Using a boolean variable was ultimately a premature optimization.

Solution

Using an enumeration in the first place would have prevented these problems.

enum EVehicleType { Truck, Car }

class Vehicle {
 EVehicleType type;
 ...
}

To accommodate motorcycles in this case, all the programmer has to do is add Motorcycle to EVehicleType, and add new logic to handle the motorcycle cases. No new variables need to be added. Existing logic shouldn't be disrupted. And someone who's unfamiliar with the code can easily understand how the type of the vehicle is stored.
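
For illustration, a small sketch of what that new logic might look like (ParkingRules and the bay names are made up for the example):

using System;

enum EVehicleType { Truck, Car, Motorcycle }

class Vehicle
{
    public EVehicleType Type;
}

static class ParkingRules
{
    // One switch over the enum: adding Bus later means one new enum member
    // and one new case, not yet another boolean flag on Vehicle.
    public static string DescribeParkingSpot(Vehicle v)
    {
        switch (v.Type)
        {
            case EVehicleType.Truck:      return "oversized bay";
            case EVehicleType.Motorcycle: return "compact bay";
            case EVehicleType.Car:        return "standard bay";
            default: throw new ArgumentException("Unknown vehicle type");
        }
    }
}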

Cliff Notes

Don't use a type that can only ever store two different states unless you're absolutely certain two states will always be enough. Use an enumeration if there are any possible conditions in which more than two states will be required in the future, even if a boolean would satisfy existing requirements.


QA should know the code (indirectly) better than development. QA gets paid to find things development didn't intend to happen, and they often do. :) (Btw, I'm a developer who just values good QA guys a whole bunch -- far too few of them... far too few).


Not really programming, but I can't stand CSS-only layouts done just for the sake of it. It's counterproductive, frustrating, and makes maintenance a nightmare of floats and margins where changing the position of a single element can throw the entire page out of whack.

It's definitely not a popular opinion, but I'm done with my table layout in 20 minutes while the CSS gurus spend hours tweaking line-height, margins, padding and floats just to do something as basic as vertically centering a paragraph.


It's fine if you don't know. But you're fired if you can't even google it.

The Internet is a tool. It's not making you stupider if you're learning from it.


I really dislike it when people tell me to use getters and setters instead of making the variable public, when you should be able to both get and set the class variable.

I totally agree with it if it's to change a variable in an object inside your object, so you don't get things like: a.b.c.d.e = something; but I would rather use a.x = something; than a.setX(something); I think a.x = something; is both easier to read and prettier than the set/get version.

I don't see the reason for writing:

void setX(T x) { this->x = x; }

T getX() { return x; }

which is more code, more time when you do it over and over again, and just makes the code harder to read.
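
For what it's worth, in C# an auto-implemented property gives you the a.X = something syntax at the call site while still leaving room to add validation later without breaking callers - a small sketch:

class Point
{
    // Reads and writes look like field access (p.X = 5;), but the class can
    // later swap this for a full property body without changing any callers.
    public int X { get; set; }
}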


When many new technologies appear on the scene I only learn enough about them to decide if I need them right now.

If not, I put them aside until the rough edges are knocked off by "early adopters" and then check back again every few months / years.


Schooling ruins creativity *

*"Ruins" means "potentially ruins"

Granted, schooling is needed! Everyone needs to learn stuff before they can use it - however, all those great ideas you had about how to do a certain strategy for a specific business-field can easily be thrown into that deep brain-void of ours if we aren't careful.

As you learn new things and acquire new skills, you are also boxing your mindset in around those new things and skills, since they apparently are "the way to do it". Being human, we tend to listen to authorities - be it a teacher, a consultant, a co-worker or even a site / forum you like. We should ALWAYS be aware of that "flaw" in how our minds work. Listen to what other people say, but don't take what they say for granted. Always keep a critical point of view on every new piece of information you receive.

Instead of thinking "Wow, that's smart. I will use that from now on", we should think "Wow, that's smart. Now, how can I fit that into my personal toolbox of skills and ideas?"


80% of bugs are introduced in the design stage.
The other 80% are introduced in the coding stage.

(This opinion was inspired by reading Dima Malenko's answer. "Development is 80% about the design and 20% about coding", yes. "This will produce code with near zero bugs", no.)


Developing on .NET is not programming. It's just stitching together other people's code.

Having come from a coding background where you were required to know the hardware, and where this is still a vital requirement in my industry, I view high-level languages as simply assembling someone else's work. Nothing essentially wrong with this, but is it 'programming'?

MS has made a mint from doing the hard work and presenting 'developers' with symbolic instruction syntax. I seem to now know more and more developers who appear constrained by the existence or non-existence of a class to perform a job.

My opinion comes from the notion that to be a programmer you should be able to program at the lowest level your platform allows. So if you're programming .NET then you need to be able to stick your head under the hood and work out the solution, rather than rely on someone else creating a class for you. That's simply lazy and does not qualify as 'development' in my book.


Primitive data types are premature optimization.

There are languages that get by with just one data type, the scalar, and they do just fine. Other languages are not so fortunate. Developers just throw "int" and "double" in because they have to write in something.

What's important is not how big the data types are, but what the data is used for. If you have a day of the month variable, it doesn't matter much if it's signed or unsigned, or whether it's char, short, int, long, long long, float, double, or long double. It does matter that it's a day of the month, and not a month, or day of week, or whatever. See Joel's column on making things that are wrong look wrong; Hungarian notation as originally proposed was a Good Idea. As used in practice, it's mostly useless, because it says the wrong thing.


When someone dismisses an entire programming language as "clumsy", it usually turns out he doesn't know how to use it.


If you're a developer, you should be able to write code

I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:

Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.

It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:

Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.

Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.

I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!
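
For reference, a sketch of one possible answer (certainly not the only one) in C#: the series alternates, so the error is bounded by the first omitted term, and we can simply stop once that term can no longer affect the fifth decimal place.

using System;

static class InterviewAnswers
{
    // First question: estimate Pi from 4 * (1 - 1/3 + 1/5 - 1/7 + ...).
    public static double CalculatePi()
    {
        const double accuracy = 0.000005; // error bound for 5 decimal places
        double pi = 0.0;
        double sign = 1.0;
        for (long i = 1; 4.0 / i > accuracy; i += 2)
        {
            pi += sign * 4.0 / i;
            sign = -sign;
        }
        return pi;
    }

    // Second question: area of a circle.
    public static double CircleArea(double radius)
    {
        return Math.PI * radius * radius;
    }
}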


Edit:

There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into this here (that's a whole new question) apart from to say you're largely missing the point of the post.

Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.


Like most others here, I try to adhere to principles like DRY and not being a human compiler.

Another strategy I want to push is "tell, don't ask". Instead of cluttering all objects with getters/setters essentially making a sieve of them, I'd like to tell them to do stuff.

This seems to go straight against good enterprise practice, with its dumb entity objects and thick service layer (that does plenty of asking). Hmmm, thoughts?


A majority of the 'user-friendly' Fourth Generation Languages (SQL included) are worthless overrated pieces of rubbish that should have never made it to common use.

4GLs in general have a wordy and ambiguous syntax. Though 4GLs are supposed to allow 'non technical people' to write programs, you still need the 'technical' people to write and maintain them anyway.

4GL programs are in general harder to write, harder to read and harder to optimize than their 3GL equivalents.

4GLs should be avoided as far as possible.


Opinion: SQL is code. Treat it as such

That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.

I hate it when I see sloppy free-formatted SQL code. If you scream when you see both styles of curly braces on a page, why oh why don't you scream when you see free-formatted SQL or SQL that obscures or obfuscates the JOIN condition?


Opinion: There should not be any compiler warnings, only errors. Or, formulated differently: you should always compile your code with -Werror.

Reason: either the compiler thinks it is something which should be corrected, in which case it should be an error, or it is not necessary to fix, in which case the compiler should just shut up.


Dependency Management Software Does More Harm Than Good

I've worked on Java projects that included upwards of a hundred different libraries. In most cases, each library has its own dependencies, and those dependent libraries have their own dependencies too.

Software like Maven or Ivy supposedly "manage" this problem by automatically fetching the correct version of each library and then recursively fetching all of its dependencies.

Problem solved, right?

Wrong.

Downloading libraries is the easy part of dependency management. The hard part is creating a mental model of the software, and how it interacts with all those libraries.

My unpopular opinion is this:

If you can't verbally explain, off the top of your head, the basic interactions between all the libraries in your project, you should eliminate dependencies until you can.

Along the same lines, if it takes you longer than ten seconds to list all of the libraries (and their methods) invoked either directly or indirectly from one of your functions, then you are doing a poor job of managing dependencies.

You should be able to easily answer the question "which parts of my application actually depend on library XYZ?"

The current crop of dependency management tools do more harm than good, because they make it easy to create impossibly-complicated dependency graphs, and they provide virtually no functionality for reducing dependencies or identifying problems.

I've seen developers include 10 or 20 MB worth of libraries, introducing thousands of dependent classes into the project, just to eliminate a few dozen lines of simple custom code.

Using libraries and frameworks can be good. But there's always a cost, and tools which obscure that cost are inherently problematic.

Moreover, it's sometimes (note: certainly not always) better to reinvent the wheel by writing a few small classes that implement exactly what you need than to introduce a dependency on a large general-purpose library.


How about this one:

Garbage collectors actually hurt programmers' productivity and make resource leaks harder to find and fix

Note that I am talking about resources in general, and not only memory.


Test Constantly

You have to write tests, and you have to write them FIRST. Writing tests changes the way you write your code. It makes you think about what you want it to actually do before you just jump in and write something that does everything except what you want it to do.

It also gives you goals. Watching your tests go green gives you that little extra bump of confidence that you're getting something accomplished.

It also gives you a basis for writing tests for your edge cases. Since you wrote the code against tests to begin with, you probably have some hooks in your code to test with.

There is no excuse not to test your code. If you don't, you're just lazy. I also think you should test first, as the benefits outweigh the extra time it takes to code this way.
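
For anyone who hasn't tried it, a minimal sketch of the flow (assuming xUnit; the class and figures are made up): the test is written first and states the intent, then the simplest implementation is written to make it go green.

using Xunit;

public class PriceCalculatorTests
{
    // Written before PriceCalculator exists - it describes what we want.
    [Fact]
    public void ApplyDiscount_TakesTenPercentOff()
    {
        var calculator = new PriceCalculator();
        Assert.Equal(90m, calculator.ApplyDiscount(100m, 0.10m));
    }
}

public class PriceCalculator
{
    // The simplest thing that makes the test above pass.
    public decimal ApplyDiscount(decimal price, decimal rate)
    {
        return price * (1 - rate);
    }
}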


Code == Design

I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.


Here's an article on the topic of Code as Design.


Most comments in code are in fact a pernicious form of code duplication.

We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.

I think eventually many people just blank them out, especially those flowerbox monstrosities.

Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.

On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the this next line adds one to invoiceTotal style of commenting.


Cleanup and refactoring are very important in (team) development

A lot of work in team development has to do with management. If you are using a bug tracker, then it is only useful if someone takes the time to close/structure things and lower the number of tickets. If you are using source code management, somebody needs to clean up and restructure the repository quite often. If you are programming, then there should be people who care about refactoring the lazily produced stuff of others. It is part of most of the aspects anyone will face while doing software development.

Everybody agrees to the necessity of this kind of management. And it is always the first thing that is skipped!


switch-case is not object oriented programming

I often see a lot of switch-case or awfully big if-else constructs. This is merely a sign of not putting state where it belongs and not using the real and efficient switch-case construct that is already there: method lookup via the vtable.
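
A minimal sketch of the idea in C# (the payment example is made up): each state carries its own behaviour, so the "switch" is the virtual method dispatch itself.

abstract class PaymentMethod
{
    public abstract decimal Fee(decimal amount);
}

class CreditCard : PaymentMethod
{
    public override decimal Fee(decimal amount) { return amount * 0.03m; }
}

class BankTransfer : PaymentMethod
{
    public override decimal Fee(decimal amount) { return 0.50m; }
}

// Call site - no switch-case, no if-else chain:
// decimal fee = method.Fee(total);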


Classes should fit on the screen.

If you have to use the scroll bar to see all of your class, your class is too big.

Code folding and miniature fonts are cheating.


Automatic Updates Lead to Poorer Quality Software that is Less Secure

The Idea

A system to keep users' software up to date with the latest bug fixes and security patches.

The Reality

Products have to be shipped by fixed deadlines, often at the expense of QA. Software is then released with many bugs and security holes in order to meet the deadline in the knowledge that the 'Automatic Update' can be used to fix all the problems later.

Now, the piece of software that really made me think of this is VS2K5. At first, it was great, but as the updates were installed the software slowly got worse. The biggest offence was the loss of macros - I had spent a long time creating a set of useful VBA macros to automate some of the code I write - but apparently there was a security hole, and instead of fixing it the macro system was disabled. Bang goes a really useful feature: recording keystrokes and repeatedly replaying them.

Now, if I were really paranoid, I could see Automatic Updates as a way to get people to upgrade their software by slowly installing code that breaks the system more often. As the system becomes more unreliable, users are tempted to pay out for the next version with the promise of better reliability and so on.

Skizz


Realizing sometimes good enough is good enough, is a major jump in your value as a programmer.

Note that when I say 'good enough', I mean 'good enough', not 'some crap that happens to work'. But then again, when you are under a time crunch, 'some crap that happens to work' may be considered 'good enough'.


There's an awful lot of bad teaching out there.

We developers like to feel smugly superior when Joel says there's a part of the brain for understanding pointers that some people are just born without. The topics many of us discuss here and are passionate about are esoteric, but sometimes that's only because we make them so.


Opinion: Not having function argument definitions and return types can lead to flexible and readable code.

This opinion probably applies more to interpreted languages than compiled ones. Requiring a return type and a function argument list is great for things like IntelliSense auto-documenting your code, but it is also a restriction.

Now don't get me wrong, I am not saying throw away return types, or argument lists. They have their place. And 90% of the time they are more of a benefit than a hindrance.

There are times and places when this is useful.


Junior programmers should be assigned to doing object/ module design and design maintenance for several months before they are allowed to actually write or modify code.

Too many programmers/developers make it to the 5 and 10 year marks without understanding the elements of good design. It can be crippling later when they want to advance beyond just writing and maintaining code.


The worst thing about recursion is recursion.


If a developer cannot write clear, concise and grammatically correct comments then they should have to go back and take English 101.

We have developers and (the horror) architects who cannot write coherently. When their documents are reviewed they say things like "oh, don't worry about grammatical errors or spelling - that's not important". Then they wonder why their convoluted garbage documents become convoluted buggy code.

I tell the interns that I mentor that if you can't communicate your great ideas verbally or in writing you may as well not have them.


Believe it or not, my belief that, in an OO language, most of the (business logic) code that operates on a class's data should be in the class itself is heresy on my team.


A degree in Computer Science or other IT area DOES make you a more well rounded programmer

I don't care how many years of experience you have, how many blogs you've read, how many open source projects you're involved in. A qualification (I'd recommend longer than 3 years) exposes you to a different way of thinking and gives you a great foundation.

Just because you've written some better code than a guy with a BSc in Computer Science, does not mean you are better than him. What you have he can pick up in an instant which is not the case the other way around.

Having a qualification shows your commitment, the fact that you would go above and beyond experience to make you a better developer. Developers which are good at what they do AND have a qualification can be very intimidating.

I would not be surprised if this answer gets voted down.

Also, once you have a qualification, you slowly stop comparing yourself to those with qualifications (my experience). You realize that it all doesn't matter at the end, as long as you can work well together.

Always act mercifully towards other developers, irrespective of qualifications.


One I have been tossing around for a while:

The data is the system.

Processes and software are built for data, not the other way around.

Without data, the process/software has little value. Data still has value without a process or software around it.

Once we understand the data, what it does, how it interacts, the different forms it exists in at different stages, only then can a solution be built to support the system of data.

Successful software/systems/processes seem to have an acute awareness, if not fanatical mindfulness, of "where" the data is at any given moment.


The ability to create UML diagrams similar to pretzels with mad cow disease is not actually a useful software development skill.

The whole point of diagramming code is to visualise connections, to see the shape of a design. But once you pass a certain rather low level of complexity, the visualisation is too much to process mentally. Making connections pictorially is only simple if you stick to straight lines, which typically makes the diagram much harder to read than if the connections were cleverly grouped and routed along the cardinal directions.

Use diagrams only for broad communication purposes, and only when they're understood to be lies.


Software Architects/Designers are Overrated

As a developer, I hate the idea of Software Architects. They are basically people who no longer code full time, read magazines and articles, and then tell you how to design software. Only people who actually write software full time for a living should be doing that. I don't care if you were the world's best coder 5 years ago before you became an Architect, your opinion is useless to me.

How's that for controversial?

Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.


Performance does matter.


To Be A Good Programmer really requires working in multiple aspects of the field: Application development, Systems (Kernel) work, User Interface Design, Database, and so on. There are certain approaches common to all, and certain approaches that are specific to one aspect of the job. You need to learn how to program Java like a Java coder, not like a C++ coder and vice versa. User Interface design is really hard, and uses a different part of your brain than coding, but implementing that UI in code is yet another skill as well. It is not just that there is no "one" approach to coding, but there is not just one type of coding.


coding is not typing

It takes time to write the code. Most of the time in the editor window, you are just looking at the code, not actually typing. Not as often, but quite frequently, you are deleting what you have written. Or moving from one place to another. Or renaming.

If you are banging away at the keyboard for a long time you are doing something wrong.

Corollary: The number of lines of code written per day is not a linear measure of a programmer's productivity; a programmer who writes 100 lines in a day is quite likely a better programmer than one who writes 20, but one who writes 5000 is certainly a bad programmer.


Architects that do not code are useless.

That sounds a little harsh, but it's not unreasonable. If you are the "architect" for a system, but do not have some amount of hands-on involvement with the technologies employed then how do you get the respect of the development team? How do you influence direction?

Architects need to do a lot more (meet with stakeholders, negotiate with other teams, evaluate vendors, write documentation, give presentations, etc.). But if you never see code checked in by your architect... be wary!


Making software configurable is a bad idea.

Configurable software allows the end-user (or admin etc) to choose too many options, which may not all have been tested together (or rather, if there are more than a very small number, I can guarantee they will not have been tested).

So I think software which has its configuration hard-coded (but not necessarily shunning constants etc) to JUST WORK is a good idea. Run with sensible defaults, and DO NOT ALLOW THEM TO BE CHANGED.

A good example of this is the number of configuration options on Google Chrome - however, this is probably still too many :)


Newer languages, and managed code do not make a bad programmer better.


Any sufficiently capable library is too complicated to be usable, and any library simple enough to be usable lacks the capabilities needed to be a good general solution.

I run into this constantly. Exhaustive libraries that are so complicated to use that I tear my hair out, and simple, easy-to-use libraries that don't quite do what I need them to do.


Understanding "what" to do is at least as important as knowing "how" to do it, and almost always it's much more important than knowing the 'best' way to solve a problem. Domain-specific knowledge is often crucial to write good software.


Not all programmers are created equal

Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.

It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)


Rob Pike wrote: "Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming."

And since these days any serious data is in the millions of records, I contend that good data modeling is the most important programming skill (whether using an RDBMS or something like SQLite, Amazon SimpleDB or Google App Engine data storage).

Fancy search and sorting algorithms aren't needed any more when the data, all the data, is stored in such a data storage system.


Not very controversial AFAIK but... AJAX was around way before the term was coined and everyone needs to 'let it go'. People were using it for all sorts of things. No one really cared about it though.

Then suddenly POW! Someone coined the term and everyone jumped on the AJAX bandwagon. Suddenly people are now experts in AJAX, as if 'experts' in dynamically loading data weren't around before. I think it's one of the biggest contributing factors leading to the brutal destruction of the internet. That and "Web 2.0".


Opinion: most code out there is crappy, because that's what the programmers WANT it to be.

Indirectly, we have been nurturing a culture of extreme creativeness. It's not that I don't think problem solving has creative elements -- it does -- it's just that it's not even remotely the same as something like painting (see Paul Graham's famous "Hackers and Painters" essay).

If we bend our industry towards that approach, ultimately it means letting every programmer go forth and whack out whatever highly creative, crazy stuff they want. Of course, for any sizable project, trying to put together dozens of unrelated, unstructured, unplanned bits into one final coherent bit won't work by definition. That's not a guess, or an estimate, it's the state of the industry that we face today. How many times have you seen sub-bits of functionality in a major program that were completely inconsistent with the rest of the code? It's so common now, it's a wonder anyone can use any of these messes.

Convoluted, complicated, ugly stuff that just keeps getting worse and more unstable. If we were building something physical, everyone on the planet would call us out on how horribly ugly and screwed up the stuff is, but because it is more or less hidden by being virtual, we are able to get away with some of the worst manufacturing processes that our species will ever see. (Can you imagine a car where four different people designed the four different wheels, in four different ways?)

But the sad part, the controversial part of it all, is that there is absolutely NO reason for it to be this way, other than that historically the culture leaned towards more freedom and less organization, so we stayed that way (and probably got a lot worse). Software development is a joke, but it's a joke because that's what the programmers want it to be (though they would never in a million years admit that it was true; a "plot by management" is a better reason for most people).

How long will we keep shooting ourselves in the foot before we wake up and realize that we are the ones holding the gun, pointing it and also pulling the trigger?

Paul.


Don't use stored procs in your database.

The reasons they were originally good - security, abstraction, single connection - can all be done in your middle tier with ORMs that integrate lots of other advantages.

This one is definitely controversial. Every time I bring it up, people tear me apart.


Once I saw the following from a co-worker:

equal = a.CompareTo(b) == 0;

I pointed out that he cannot assume that in the general case, but he just laughed.
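
A made-up example of why that assumption fails in the general case: if the ordering only looks at one field, two distinct objects can compare as "equal" for sorting purposes while Equals is still false.

using System;

class Product : IComparable<Product>
{
    public string Name;
    public decimal Price;

    // Ordering is by price only.
    public int CompareTo(Product other)
    {
        return Price.CompareTo(other.Price);
    }
}

// var a = new Product { Name = "Tea",    Price = 2.50m };
// var b = new Product { Name = "Coffee", Price = 2.50m };
// a.CompareTo(b) == 0   --> true  (same sort position)
// a.Equals(b)           --> false (default reference equality)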


Extension Methods are the work of the Devil

Everyone seems to think that extension methods in .Net are the best thing since sliced bread. The number of developers singing their praises seems to rise by the minute but I'm afraid I can't help but despise them and unless someone can come up with a brilliant justification or example that I haven't already heard then I will never write one. I recently came across this thread and I must say reading the examples of the highest voted extensions made me feel a little like vomiting (metaphorically of course).

The main reasons given for their extensiony goodness are increased readability, improved OO-ness and the ability to chain method calls better.

I'm afraid I have to differ; I find in fact that they, unequivocally, reduce readability and OO-ness by virtue of the fact that they are at their core a lie. If you need a utility method that acts upon an object then write a utility method that acts on that object - don't lie to me. When I see aString.SortMeBackwardsUsingKlingonSortOrder, then string should have that method, because that call is telling me something about the string object, not something about the AnnoyingNerdReferences.StringUtilities class.
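
To make the complaint concrete, here is a sketch (the deliberately silly method name is borrowed from the example above, and the body is invented): the extension method looks like a member of string at the call site, but it is nothing more than a static utility call in disguise.

using System;

namespace AnnoyingNerdReferences
{
    public static class StringUtilities
    {
        public static string SortMeBackwardsUsingKlingonSortOrder(this string s)
        {
            char[] chars = s.ToCharArray();
            Array.Reverse(chars);   // stand-in for whatever the real ordering would be
            return new string(chars);
        }
    }
}

// aString.SortMeBackwardsUsingKlingonSortOrder()
// compiles to exactly:
// StringUtilities.SortMeBackwardsUsingKlingonSortOrder(aString)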

LINQ was designed in such a way that chained method calls are necessary to avoid strange and uncomfortable expressions, and the extension methods that arise from LINQ are understandable, but in general chained method calls reduce readability and lead to code of the sort we see in obfuscated Perl contests.

So, in short, extension methods are evil. Cast off the chains of Satan and commit yourself to extension free code.


C++ is future killer language...

... of dynamic languages.

Nobody owns it; it has a growing set of features like compile-time (meta-)programming and type inference, callbacks without the overhead of function calls, and it doesn't enforce a single approach (it is multi-paradigm). POSIX and ECMAScript regular expressions. Multiple return values. You can have named arguments. Etc., etc.

Things move really slowly in programming. It took JavaScript 10 years to get off the ground (mostly because of performance), and most of the people who program in it still don't get it (classes in JS? c'mon!). I'd say C++ will really start shining 15-20 years from now. That seems to me like about the right amount of time for C++ (the language as well as compiler vendors) and a critical mass of programmers who today write in dynamic languages to converge.

C++ needs to become more programmer-friendly (compiler errors generated from templates, and compile times in the presence of the same), and programmers need to realize that static typing is a boon (it's already in progress; see the other answer here which asserts that good code written in a dynamically typed language is written as if the language were statically typed).


Singletons are not evil

There is a place for singletons in the real world, and methods to get around them (i.e. the monostate pattern) are simply singletons in disguise. For instance, a Logger is a perfect candidate for a singleton. Additionally, so is a message pump. My current app uses distributed computing, and different objects need to be able to send appropriate messages. There should only be one message pump, and everyone should be able to access it. The alternative is passing an object to my message pump everywhere it might be needed and hoping that a new developer doesn't new one up without thinking and wonder why his messages are going nowhere. The uniqueness of the singleton is the most important part, not its availability. The singleton has its place in the world.
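
A bare-bones sketch of the Logger case in C# (error handling and file output omitted): exactly one instance, globally reachable, created by the type initializer.

using System;

public sealed class Logger
{
    private static readonly Logger instance = new Logger();

    public static Logger Instance
    {
        get { return instance; }
    }

    private Logger() { }

    public void Log(string message)
    {
        Console.WriteLine("{0:u} {1}", DateTime.UtcNow, message);
    }
}

// Anywhere in the code base:
// Logger.Instance.Log("message pump started");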


"Good Coders Code and Great Coders Reuse It" This is happening right now But "Good Coder" is the only ONE who enjoy that code. and "Great Coders" are for only to find out the bug in to that because they don't have the time to think and code. But they have time for find the bug in that code.

So don't criticize!

Create your own code how YOU want.


Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)


"else" is harmful.


Keep your business logic out of the DB. Or at a minimum, keep it very lean. Let the DB do what it's intended to do. Let code do what code is intended to do. Period.

If you're a one man show (basically, arrogant & egotistical, not listening to the wisdom of others just because you're in control), do as you wish. I don't believe you're that way since you're asking to begin with. But I've met a few when it comes to this subject and felt the need to specify.

If you work with DBAs but do your own DB work, keep clearly defined partitions between your business objects, the gateway between them and the DB, and the DB itself.

If you work with DBAs and aren't allowed to do your DB work (either by policy or because they're prima donnas), you're very close to being a fool if you place your reliance on them to get anything done by putting code-dependent business logic in your DB entities (sprocs, functions, etc.).

If you're a DBA, make developers keep their DB entities clean & lean.


I also think there's nothing wrong with having binaries in source control... if there is a good reason for it. If I have an assembly I don't have the source for, and which might not necessarily be in the same place on each dev's machine, then I will usually stick it in a "binaries" directory and reference it in a project using a relative path.

Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.


Developers overuse databases

All too often, developers store data in a DBMS that should be in code or in file(s). I've seen a one-column-one-row table that stored the 'system password' (separate from the user table.) I've seen constants stored in databases. I've seen databases that would make a grown coder cry.

There is some sort of mystical awe that the offending coders have of the DBMS--the database can do anything, but they don't know how it works. DBAs practice a black art. It also allows responsibility transference: "The database is too slow," "The database did it" and other excuses are common.

Left unchecked, these coders go on to develop databases-within-databases, systems-within-systems. (There is a name for this anti-pattern, but I forget what it is.)


There is only one design pattern: encapsulation

For example:

  • Factory method: you've encapsulated object creation
  • Strategy: you encapsulated different changeable algorithms
  • Iterator: you encapsulated the way to sequentially access the elements in the collection.
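
Taking the Strategy bullet as a concrete example (the sorting scenario is made up): the changeable algorithm is encapsulated behind one interface, so callers never see which concrete algorithm they are using.

using System;

interface ISortStrategy
{
    void Sort(int[] values);
}

class QuickSortStrategy : ISortStrategy
{
    public void Sort(int[] values) { Array.Sort(values); }
}

class Report
{
    private readonly ISortStrategy sorter;

    public Report(ISortStrategy sorter) { this.sorter = sorter; }

    public void Render(int[] figures)
    {
        sorter.Sort(figures);   // the encapsulated, swappable algorithm
        // ... render the sorted figures ...
    }
}
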

If you only know one language, no matter how well you know it, you're not a great programmer.

There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning then that's all you need. I don't believe it- every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.

It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.


Programmers should avoid method hiding through inheritance at all costs.

In my experience, virtually every place I have ever seen inherited method hiding used, it has caused problems. Method hiding results in objects behaving differently when accessed through a base type reference vs. a derived type reference - this is generally a Bad Thing. While many programmers are not formally aware of it, most intuitively expect that objects will adhere to the Liskov Substitution Principle. When objects violate this expectation, many of the assumptions inherent to object-oriented systems can begin to fray. The most egregious cases I've seen are when the hidden method alters the state of the object instance. In these cases, the behavior of the object can change in subtle ways that are difficult to debug and diagnose.

Ok, so there may be some infrequent cases where method hiding is actually useful and beneficial - like emulating return type covariance of methods in languages that don't support it. But the vast majority of the time, when developers use method hiding it is either out of ignorance (or accident) or as a way to hack around some problem that probably deserves better design treatment. In general, the beneficial cases I've seen of method hiding (not to say there aren't others) are when a side-effect-free method that returns some information is hidden by one that computes something more applicable to the calling context.

Languages like C# have improved things a bit by requiring the new keyword on methods that hide a base class method - at least helping avoid involuntary use of method hiding. But I find that many people still confuse the meaning of new with that of override - particularly since in simple scenarios their behavior can appear identical. It would be nice if tools like FxCop actually had built-in rules for identifying potentially bad usage of method hiding.
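
For anyone who hasn't been bitten by it, a small made-up illustration of the trap: Describe() is hidden with "new", so the answer depends on the type of the reference, not on the object.

using System;

class BaseAccount
{
    public string Describe() { return "base account"; }
}

class SavingsAccount : BaseAccount
{
    public new string Describe() { return "savings account"; }
}

class Demo
{
    static void Main()
    {
        SavingsAccount savings = new SavingsAccount();
        BaseAccount asBase = savings;

        Console.WriteLine(savings.Describe());  // "savings account"
        Console.WriteLine(asBase.Describe());   // "base account" - same object, different answer
    }
}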

By the way, method hiding through inheritance should not be confused with other kinds of hiding - such as through nesting - which I believe is a valid and useful construct with fewer potential problems.


Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.

I think that a method should be created wherever you can name one.


Garbage collection is overrated

Many people consider the introduction of garbage collection in Java one of the biggest improvements compared to C++. I consider the introduction to be very minor at best; well-written C++ code does all the memory management in the proper places (with techniques like RAII), so there is no need for a garbage collector.


Opinion: Never ever have different code between "debug" and "release" builds

The main reason being that release code almost never gets tested. Better to have the same code running in test as it is in the wild.


There are some (very few) legitimate uses for goto (particularly in C, as a stand-in for exception handling).


Separation of concerns is evil :)

Only separate concerns if you have good reason for it. Otherwise, don't separate them.

I have encountered too many occasions of separation only for the sake of separation. The second half of Dijkstra's statement "Minimal coupling, maximal cohesion" should not be forgotten. :)

Happy to discuss this further.


Sometimes jumping on the bandwagon is ok

I get tired of people exhibiting "grandpa syndrome" ("You kids and your newfangled Test Driven Development. Every big technology that's come out in the last decade has sucked. Back in my day, we wrote real code!"... you get the idea).

Sometimes things that are popular are popular for a reason.


Sometimes you have to denormalize your databases.

An opinion that doesn't go down well with most programmers, but you have to sacrifice things like normalization for performance sometimes.


Don't worry too much about what language to learn; use the industry heavyweights like C# or Python. Languages like Ruby are fun in the bedroom, but don't do squat in workplace scenarios. Languages like C# and Java can handle small to very large software projects. If anyone says otherwise, then you're talking about a scripting language. Period!

Before starting a project, consider how much support and how many code samples are available on the net. Again, choosing a language like Ruby, which has very few code samples on the web compared to Java for example, will only cause you grief further down the road when you're stuck on a problem.

You can't post a message on a forum and expect an answer back while your boss is asking you how your coding is going. What are you going to say? "I'm waiting for someone to help me out on this forum"

Learn one language and learn it well. Learning multiple languages may carry over skills and practices, but you'll only ever be OK at all of them. Be good at one. There are entire books dedicated to threading in Java which, when you think about it, covers only one namespace out of over 100.

Master one or be ok at lots.


Software Development is a VERY small subset of Computer Science.

People sometimes seem to think the two are synonymous, but in reality there are so many aspects to computer science that the average developer rarely (if ever) gets exposed to. Depending on one's career goals, I think there are a lot of CS graduates out there who would probably have been better off with some sort of Software Engineering education.

I value education highly, have a BS in Computer science and am pursuing a MS in it part time, but I think that many people who obtain these degrees treat the degree as a means to an end and benefit very little. I know plenty of people who took the same Systems Software course I took, wrote the same assembler I wrote, and to this day see no value in what they did.


A random collection of Cook's aphorisms...

  • The hardest language to learn is your second.

  • The hardest OS to learn is your second one - especially if your first was an IBM mainframe.

  • Once you've learned several seemingly different languages, you finally realize that all programming languages are the same - just minor differences in syntax.

  • Although one can be quite productive and marketable without having learned any assembly, no one will ever have a visceral understanding of computing without it.

  • Debuggers are the final refuge for programmers who don't really know what they're doing in the first place.

  • No OS will ever be stable if it doesn't make use of hardware memory management.

  • Low level systems programming is much, much easier than applications programming.

  • The programmer who has a favorite language is just playing.

  • Write the User's Guide FIRST!

  • Policy and procedure are intended for those who lack the initiative to perform otherwise.

  • (The Contractor's Creed): Tell'em what they need. Give'em what they want. Make sure the check clears.

  • If you don't find programming fun, get out of it or accept that although you may make a living at it, you'll never be more than average.

  • Just as the old farts have to learn the .NET method names, you'll have to learn the library calls. But there's nothing new there.
    The life of a programmer is one of constantly adapting to different environments, and the more tools you have hung on your belt, the more versatile and marketable you'll be.

  • You may piddle around a bit with little code chunks near the beginning to try out some ideas, but, in general, one doesn't start coding in earnest until you KNOW how the whole program or app is going to be laid out, and you KNOW that the whole thing is going to work EXACTLY as advertised. For most projects with at least some degree of complexity, I generally end up spending 60 to 70 percent of the time up front just percolating ideas.

  • Understand that programming has little to do with language and everything to do with algorithm. All of those nifty geegaws with memorable acronyms that folks have come up with over the years are just different ways of skinning the implementation cat. When you strip away all the OOPiness, RADology, Development Methodology 37, and Best Practice 42, you still have to deal with the basic building blocks of:

    • assignments
    • conditionals
    • iterations
    • control flow
    • I/O

Once you can truly wrap yourself around that, you'll eventually get to the point where you see (from a programming standpoint) little difference between writing an inventory app for an auto parts company, a graphical real-time TCP performance analyzer, a mathematical model of a stellar core, or an appointments calendar.

  • Beginning programmers work with small chunks of code. As they gain experience, they work with ever increasingly large chunks of code.
    As they gain even more experience, they work with small chunks of code.

Opinion: explicit variable declaration is a great thing.

I'll never understand the "wisdom" of letting the developer waste costly time tracking down runtime errors caused by variable name typos instead of simply letting the compiler/interpreter catch them.

Nobody's ever given me an explanation better than "well it saves time since I don't have to write 'int i;'." Uhhhhh... yeah, sure, but how much time does it take to track down a runtime error?
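
To put a concrete face on it, here's a tiny Python sketch (names invented) of the failure mode I mean - a one-character typo quietly binds a new variable instead of being rejected up front:

def total_price(items):
    total = 0
    for item in items:
        totl = total + item.price   # typo: binds a brand-new name; nothing complains here
    return total                    # quietly returns 0 instead of failing before the program runs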


That best practices are a hazard because they ask us to substitute slogans for thinking.


I've been burned for broadcasting these opinions in public before, but here goes:

Well-written code in dynamically typed languages follows static-typing conventions

Having used Python, PHP, Perl, and a few other dynamically typed languages, I find that well-written code in these languages follows static typing conventions, for example:

  • It's considered bad style to re-use a variable with different types (for example, it's bad style to take a list variable and assign it an int, then assign the variable a bool in the same method). Well-written code in dynamically typed languages doesn't mix types.

  • A type-error in a statically typed language is still a type-error in a dynamically typed language.

  • Functions are generally designed to operate on a single datatype at a time, so that a function which accepts a parameter of type T can only sensibly be used with objects of type T or subclasses of T.

  • Functions designed to operate on many different datatypes are written in a way that constrains parameters to a well-defined interface. In general terms, if two objects of types A and B perform a similar function, but aren't subclasses of one another, then they almost certainly implement the same interface.

While dynamically typed languages certainly provide more than one way to crack a nut, most well-written, idiomatic code in these languages pays close attention to types just as rigorously as code written in statically typed languages.
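
To make that concrete, a small Python sketch (my own invented example) of what "following static-typing conventions" looks like in a dynamic language:

def total_weight(packages):
    """Behaves as if typed: takes an iterable of objects exposing .weight_kg, returns a float.
    No name here is re-used with a different type."""
    total = 0.0
    for package in packages:
        total += package.weight_kg   # any object with weight_kg works: an implicit, well-defined interface
    return total

class Box:
    def __init__(self, weight_kg):
        self.weight_kg = weight_kg

class Envelope:
    def __init__(self, weight_kg):
        self.weight_kg = weight_kg

print(total_weight([Box(2.5), Envelope(0.1)]))   # duck typing, but against one well-defined interface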

Dynamic typing does not reduce the amount of code programmers need to write

When I point out how peculiar it is that so many static-typing conventions cross over into dynamic typing world, I usually add "so why use dynamically typed languages to begin with?". The immediate response is something along the lines of being able to write more terse, expressive code, because dynamic typing allows programmers to omit type annotations and explicitly defined interfaces. However, I think the most popular statically typed languages, such as C#, Java, and Delphi, are bulky by design, not as a result of their type systems.

I like to use languages with a real type system like OCaml, which is not only statically typed, but its type inference and structural typing allow programmers to omit most type annotations and interface definitions.

The existence of the ML family of languages demonstrates that we can enjoy the benefits of static typing with all the brevity of writing in a dynamically typed language. I actually use OCaml's REPL for ad hoc, throwaway scripts in exactly the same way everyone else uses Perl or Python as a scripting language.


Excessive HTML in PHP files: sometimes necessary

Excessive Javascript in PHP files: trigger the raptor attack

While I have a hard time figuring out all your switching between echoing and ?> <?php-ing HTML (after all, PHP is just a processor for HTML), lines and lines of JavaScript added in make it a completely unmaintainable mess.

People have to grasp this: They are two separate programming languages. Pick one to be your primary language. Then go on and find a quick, clean and easily maintainable way to make your primary include the secondary language.

The reason why you jump between PHP, Javascript and HTML all the time is because you are bad at all three of them.

Ok, maybe it's not exactly controversial. I had the impression this was a general frustration venting topic :)


Readability is the most important aspect of your code.

Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.


90 percent of programmers are pretty damn bad programmers, and virtually all of us have absolutely no tools to evaluate our current ability level (although we can generally look back and realize how bad we USED to suck)

I wasn't going to post this because it pisses everyone off and I'm not really trying for a negative score or anything, but:

A) isn't that the point of the question, and

B) Most of the "Answers" in this thread prove this point

I heard a great analogy the other day: Programming abilities vary AT LEAST as much as sports abilities. How many of us could jump into a professional team and actually improve their chances?


BAD IDEs make the programming language weak

Good programming IDEs really make working with certain languages easier and better to oversee. I have been a bit spoiled in my professional career; the companies I worked for always had the latest Visual Studio ready to use.

For about 8 months, I have been doing a lot of Cocoa next to my work and the Xcode editor makes working with that language just way too difficult. Overloads are difficult to find and the overall way of handling open files just makes your screen really messy, really fast. It's really a shame, because Cocoa is a cool and powerful language to work with.

Of course die-hard Xcode fans will now vote down my post, but there are so many IDEs that are really a lot better.

People making a switch to IT, who just shouldn't

This is a copy/paste from a blog post of mine, made last year.


The experiences I have are mainly about the dutch market, but they also might apply to any other market.

We (as I group all Software Engineers together) are currently in a market that might look very good for us. Companies are desperately trying to get Software Engineers (from now on, SE), no matter the price. If you switch jobs now, you can demand almost anything you want. In the Netherlands there is a trend now to even give 2 lease cars with a job, just to get you to work for them. How weird is that? How am I gonna drive 2 cars at the same time??

Of course this sounds very good for us, but this also creates a very unhealthy situation..

For example: If you are currently working for a company which is growing fast and you are trying to attract more co-workers, to finally get some serious software development from the ground, there is no-one to be found without offering sky high salaries. Trying to find quality co-workers is very hard. A lot of people are attracted to our kind of work, because of the good salaries, but this also means that a lot of people without the right passion are entering our market.

Passion, yes, I think that is the right word. When you have passion for your job, your job won’t stop at 05:00 PM. You will keep refreshing all of your development RSS feeds all night. You will search the internet for the latest technologies that might be interesting to use at work. And you will start about a dozen new ‘promising’ projects a month, just to see if you can master that latest technology you just read about a couple of weeks ago (and find a useful way of actually using that technology).

Without that passion, the market might look very nice (because of the cars, money and of course the hot girls we attract), but I don’t think the work will stay interesting for very long, compared to, let’s say, being a fireman or a fighter pilot.

It might sound like I am trying to protect my own job here, and partly that is true. But I am also trying to protect myself against the people I don’t want to work with. I want to have heated discussions about stuff I read about. I want to be able to spar with people that have the same ‘passion’ for the job as I have. I want colleagues that are working with me for the right reasons.

Where are those people I am looking for!!


That most language proponents make a lot of noise.


Code layout does matter

Maybe specifics of brace position should remain purely religious arguments - but it doesn't mean that all layout styles are equal, or that there are no objective factors at all!

The trouble is that the uber-rule for layout, namely "be consistent", sound as it is, is used as a crutch by many to never try to see if their default style can be improved on - and, furthermore, to claim that it doesn't even matter.

A few years ago I was studying Speed Reading techniques, and some of the things I learned about how the eye takes in information in "fixations", can most optimally scan pages, and the role of subconsciously picking up context, got me thinking about how this applied to code - and writing code with it in mind especially.

It led me to a style that tended to be columnar in nature, with identifiers logically grouped and aligned where possible (in particular I became strict about having each method argument on its own line). However, rather than long columns of unchanging structure it's actually beneficial to vary the structure in blocks so that you end up with rectangular islands that the eye can take in in a single fixation - even if you don't consciously read every character.
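
A rough Python illustration of the kind of layout I mean (contrived names, and only a sketch - the real rules are more nuanced):

def fetch_report(url, timeout, retries):   # stub, just so the snippet runs; the point is the layout
    return url, timeout, retries

retry_count  = 3
timeout_secs = 30
base_url     = "https://example.invalid/api"

response = fetch_report(
    url     = base_url,      # one argument per line,
    timeout = timeout_secs,  # identifiers aligned into a
    retries = retry_count,   # rectangular island
)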

The net result is that, once you get used to it (which typically takes 1-3 days) it becomes pleasing to the eye, easier and faster to comprehend, and is less taxing on the eyes and brain because it's laid out in a way that makes it easier to take in.

Almost without exception, everyone I have asked to try this style (including myself) initially said, "ugh I hate it!", but after a day or two said, "I love it - I'm finding it hard not to go back and rewrite all my old stuff this way!".

I've been hoping to find the time to do more controlled experiments to collect together enough evidence to write a paper on, but as ever have been too busy with other things. However this seemed like a good opportunity to mention it to people interested in controversial techniques :-)

[Edit]

I finally got around to blogging about this (after many years parked in the "meaning to" phase): Part one, Part two, Part three.


Assembly is the best first programming language.


Pagination is never what the user wants

If you start having the discussion about where to do pagination, in the database, in the business logic, on the client, etc. then you are asking the wrong question. If your app is giving back more data than the user needs, figure out a way for the user to narrow down what they need based on real criteria, not arbitrary sized chunks. And if the user really does want all those results, then give them all the results. Who are you helping by giving back 20 at a time? The server? Is that more important than your user?

[EDIT: clarification, based on comments]

As a real world example, let's look at this Stack Overflow question. Let's say I have a controversial programming opinion. Before I post, I'd like to see if there is already an answer that addresses the same opinion, so I can upvote it. The only option I have is to click through every page of answers.

I would prefer one of these options:

  1. Allow me to search through the answers (a way for me to narrow down what I need based on real criteria).

  2. Allow me to see all the answers so I can use my browser's "find" option (give me all the results).

The same applies if I just want to find an answer I previously read, but can't find anymore. I don't know when it was posted or how many votes it has, so the sorting options don't help. And even if I did, I still have to play a guessing game to find the right page of results. The fact that the answers are paginated and I can directly click into one of a dozen pages is no help at all.

--
bmb


MVC for the web should be far simpler than traditional MVC.

Traditional MVC involves code that "listens" for "events" so that the view can continually be updated to reflect the current state of the model. In the web paradigm however, the web server already does the listening, and the request is the event. Therefore MVC for the web need only be a specific instance of the mediator pattern: controllers mediating between views and the model. If a web framework is crafted properly, a re-usable core should probably not be more than 100 lines. That core need only implement the "page controller" paradigm but should be extensible so as to be able to support the "front controller" paradigm.

Below is a method that is the crux of my own framework, used successfully in an embedded consumer device manufactured by a Fortune 100 network hardware manufacturer, for a Fortune 50 media company. My approach has been likened to Smalltalk by a former Smalltalk programmer and author of an O'Reilly book about the most prominent Java web framework ever; furthermore I have ported the same framework to mod_python/psp.

static function sendResponse(IBareBonesController $controller) {
  // Mediate: ask the controller to apply the request input to the model,
  // and store whatever object it hands back.
  $controller->setMto($controller->applyInputToModel());
  // Then let that object push the model's state into the view.
  $controller->mto->applyModelToView();
}

My controversial opinion: Object Oriented Programming is absolutely the worst thing that's ever happened to the field of software engineering.

The primary problem with OOP is the total lack of a rigorous definition that everyone can agree on. This easily leads to implementations that have logical holes in them, or languages like Java that adhere to this bizarre religious dogma about what OOP means, while forcing the programmer into doing all these contortions and "design patterns" just to work around the limitations of a particular OOP system.

So, OOP tricks the programmer into thinking they're making these huge productivity gains, that OOP is somehow a "natural" way to think, while forcing the programmer to type boatloads of unnecessary boilerplate.

Then since nobody knows what OOP actually means, we get vast amounts of time wasted on petty arguments about whether language X or Y is "truly OOP" or not, what bizarre cargo cultish language features are absolutely "essential" for a language to be considered "truly OOP".

Instead of demanding that this language or that language be "truly oop", we should be looking at what language features are shown by experiment, to actually increase productivity, instead of trying to force it into being some imagined ideal language, or indeed forcing our programs to conform to some platonic ideal of a "truly object oriented program".

Instead of insisting that our programs conform to some platonic ideal of "Truly object oriented", how about we focus on adhering to good engineering principles, making our code easy to read and understand, and using the features of a language that are productive and helpful, regardless of whether they are "OOP" enough or not.


Don't worry too much about what language to learn; use the industry heavyweights like C# or Python. Languages like Ruby are fun in the bedroom, but don't do squat in workplace scenarios. Languages like C# and Java can handle small to very large software projects. If anyone says otherwise, then you're talking about a scripting language. Period!

Before starting a project, consider how much support and how many code samples are available on the net. Again, choosing a language like Ruby, which has very few code samples on the web compared to Java for example, will only cause you grief further down the road when you're stuck on a problem.

You can't post a message on a forum and expect an answer back while your boss is asking you how your coding is going. What are you going to say? "I'm waiting for someone to help me out on this forum"

Learn one language and learn it well. Learning multiple languages may carry over skills and practices, but you'll only ever be OK at all of them. Be good at one. There are entire books dedicated to threading in Java which, when you think about it, is only one namespace out of over 100.

Master one or be ok at lots.


Debuggers should be forbidden. This would force people to write code that is testable through unit tests, and in the end would lead to much better code quality.

Remove Copy & Paste from ALL programming IDEs. Copy & pasted code is very bad, this option should be completely removed. Then the programmer will hopefully be too lazy to retype all the code so he makes a function and reuses the code.

Whenever you use a Singleton, slap yourself. Singletons are almost never necessary, and are most of the time just a fancy name for a global variable.


I believe in the Zen of Python


My controversial opinion? Java doesn't suck but Java APIs do. Why do Java libraries insist on making it hard to do simple tasks? And why, instead of fixing the APIs, do they create frameworks to help manage the boilerplate code? This opinion can apply to any language that requires 10 or more lines of code to read a line from a file.


I have two:

Design patterns are sometimes a way for bad programmers to write bad code - "when you have a hammer, all the world looks like a nail" mentality. If there is something I hate to hear, it's two developers creating a design by patterns: "We should use command with facade ...".

There is no such thing as "premature optimization". You should profile and optimize your code before you get to the point where it becomes too painful to do so.


Developers are all different, and should be treated as such.

Developers don't fit into a box, and shouldn't be treated as such. The best language or tool for solving a problem has just as much to do with the developers as it does with the details of the problem being solved.


Writing extensive specifications is futile.
It's pretty difficult to write correct programs, but compilers, debuggers, unit tests, testers etc. make it possible to detect and eliminate most errors. On the other hand, when you write specs with a level of detail comparable to a program's (i.e. pseudocode, UML), you are mostly on your own. Consider yourself lucky if you have a tool that helps you get the syntax right.

Extensive specifications are most likely bug riddled.
The chance that the writer got it right on the first try is about the same as the chance that a similarly large program is bug-free without ever being tested. Peer reviews eliminate some bugs, just like code reviews do.


  • Xah Lee: actually has some pretty noteworthy and legitimate viewpoints if you can filter out all the invective, and rationally evaluate statements without agreeing (or disagreeing) based solely on the personality behind the statements. A lot of my "controversial" viewpoints have been echoed by him, and other notorious "trolls" who have criticized languages or tools I use(d) on a regular basis.

  • [Documentation Generators](http://en.wikipedia.org/wiki/Comparison_of_documentation_generators): ... the kind where the creator invented some custom-made especially-for-documenting-sourcecode roll-your-own syntax (including, but not limited to JavaDoc) are totally superfluous and a waste of time because:

    • They are underused by the people who should be using them the most; and
    • All of these mini-documentation-languages could easily be replaced with YAML.

Functional programming is NOT more intuitive or easier to learn than imperative programming.

There are many good things about functional programming, but I often hear functional programmers say it's easier to understand functional programming than imperative programming for people with no programming experience. From what I've seen it's the opposite: people find trivial problems hard to solve because they don't get how to manage and reuse their temporary results when they end up in a world without state.
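
A tiny Python example of the hurdle I mean (invented and deliberately trivial): the imperative version lets a beginner poke at a mutable running total, while the functional version forces them to thread that "temporary result" through an accumulator explicitly:

from functools import reduce

# Imperative: state you can inspect and mutate step by step
def average_imperative(xs):
    total = 0
    count = 0
    for x in xs:
        total += x
        count += 1
    return total / count

# Functional style: the running (total, count) pair has to be threaded through explicitly
def average_functional(xs):
    total, count = reduce(lambda acc, x: (acc[0] + x, acc[1] + 1), xs, (0, 0))
    return total / count

print(average_imperative([1, 2, 3]), average_functional([1, 2, 3]))   # 2.0 2.0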


Here's one which has seemed obvious to me for many years but is anathema to everyone else: it is almost always a mistake to switch off C (or C++) assertions with NDEBUG in 'release' builds. (The sole exceptions are where the time or space penalty is unacceptable).

Rationale: If an assertion fails, your program has entered a state which

  • has never been tested
  • the developer was unable to code a recovery strategy for
  • the developer has effectively documented as being inconceivable.

Yet somehow 'industry best practice' is that the thing should just muddle on and hope for the best when it comes to live runs with your customers' data.
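
The same trade-off exists outside C and C++. In Python, for instance, running the interpreter with -O strips assert statements, which is morally the same as compiling with NDEBUG - a quick sketch:

def apply_discount(price, fraction):
    # Document the "inconceivable" states right where they matter.
    assert price >= 0, "negative price should be impossible here"
    assert 0.0 <= fraction <= 1.0, "discount fraction out of range"
    return price * (1.0 - fraction)

# python script.py      -> a broken caller fails loudly, at the point of the bug
# python -O script.py   -> the checks vanish, and bad data just muddles on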


Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.

I think that a method should be created wherever you can name one.


I hate universities and institutes offering short courses for teaching programming to newcomers. It is an outright disgrace and shows contempt for the art and science of programming.

They start teaching C, Java, VB (disgusting) to people without a good grasp of hardware and the fundamental principles of computers. They should first be taught about the MACHINE by books like Morris Mano's Computer System Architecture and then taught the concept of instructing the machine to solve problems, instead of etching the semantics and syntax of one programming language into them.

Also I don't understand government schools and colleges teaching children the basics of computers using commercial operating systems and software. At least in my country (India), not many students can afford to buy operating systems and even discounted office suites, let alone the development software juggernaut (compilers, IDEs etc). This prompts theft and piracy and makes copying and stealing software from their institutes' libraries feel like a justified act.

Again they are taught to use some products not the fundamental ideas.

Think about it: what if you were taught only that 2x2 is 4, but never the concept of multiplication?

Or if you were taught how to measure the length of a pole leaning against your school's compound wall, but not the Pythagorean theorem?


Procedural programming is fun. OOP is boring.


Developers overuse databases

All too often, developers store data in a DBMS that should be in code or in file(s). I've seen a one-column-one-row table that stored the 'system password' (separate from the user table.) I've seen constants stored in databases. I've seen databases that would make a grown coder cry.

There is some sort of mystical awe that the offending coders have of the DBMS--the database can do anything, but they don't know how it works. DBAs practice a black art. It also allows responsibility transference: "The database is too slow," "The database did it" and other excuses are common.

Left unchecked, these coders go on to develop databases-within-databases, systems-within-systems. (There is a name for this anti-pattern, but I forget what it is.)


Programming is so easy a five year old can do it.

Programming in and of itself is not hard, it's common sense. You are just telling a computer what to do. You're not a genius, please get over yourself.


I think it's fine to use goto statements, if you use them in a sane way (and a sane programming language). They can often make your code a lot easier to read and don't force you to use some twisted logic just to get one simple thing done.


If your text editor doesn't do good code completion, you're wasting everyone's time.

Quickly remembering thousands of argument lists, spellings, and return values (not to mention class structures and similarly complex organizational patterns) is a task computers are good at and people (comparatively) are not. I buy wholeheartedly that slowing yourself down a bit and avoiding the gadget/feature cult is a great way to increase efficiency and avoid bugs, but there is simply no benefit to spending 30 seconds hunting unnecessarily through sourcecode or docs when you could spend nil... especially if you just need a spelling (which is more often than we like to admit).

Granted, if there isn't an editor that provides this functionality for your language, or the task is simple enough to knock out in the time it would take to load a heavier editor, nobody is going to tell you that Eclipse and 90 plugins is the right tool. But please don't tell me that the ability to H-J-K-L your way around like it's 1999 really saves you more time than hitting escape every time you need a method signature... even if you do feel less "hacker" doing it.

Thoughts?


Web applications suck

My Internet connection is veeery slow. My experience with almost every Web site that is not Google is, at least, frustrating. Why doesn't anybody write desktop apps anymore? Oh, I see. Nobody wants to be bothered with learning how operating systems work. At least, not Windows. The last time you had to handle WM_PAINT, your head exploded. Creating a worker thread to perform a long task (I mean, doing it the Windows way) was totally beyond you. What the hell was a callback? Oh, my God!


Garbage collection sucks

No, it actually doesn't. But it makes programmers suck like nothing else. In college, the first language they taught us was Visual Basic (the original one). After that, there was another course where the teachers pretended to teach us C++. But the damage was done. Nobody actually knew what this esoteric keyword delete did. After testing our programs, we either got invalid address exceptions or memory leaks. Sometimes, we got both. Among the 1% of my faculty who can actually program, only one can manage his memory by himself (at least, he pretends to), and he's writing this rant. The rest write their programs in VB.NET, which, by definition, is a bad language.


Dynamic typing sucks

Unless you're using assembler, of course (that's the kind of dynamic typing that actually deserves praise). What I meant is that the overhead imposed by dynamic, interpreted languages makes them suck. And don't give me that silly argument that different tools are good for different jobs. C is the right language for almost everything (it's fast, powerful and portable), and, when it isn't (it's not fast enough), there's always inline assembly.


I might come up with more rants, but that will be later, not now.


Not everything needs to be encapsulated into its own method. Sometimes it is ok to have a method do more than one thing.


Copy/Paste IS the root of all evil.


I work in ASP.NET / VB.NET a lot and find ViewState an absolute nightmare. It's enabled by default on the majority of fields and causes a large quantity of encoded data at the start of every web page. The bigger a page gets in terms of controls on a page, the larger the ViewState data will become. Most people don't bat an eye at it, but it creates a large set of data which is usually irrelevant to the tasks being carried out on the page. You must manually disable this option on all ASP controls if they're not being used. It's either that or have custom controls for everything.

On some pages I work with, half of the page is made up of ViewState, which is a shame really as there are probably better ways of doing it.

That's just one small example I can think of in terms of language/technology opinions. It may be controversial.

By the way, you might want to edit voting on this thread, it could get quite heated by some ;)


We're software developers, not C/C#/C++/PHP/Perl/Python/Java/... developers.

After you've been exposed to a few languages, picking up a new one and being productive with it is a small task. That is to say that you shouldn't be afraid of new languages. Of course, there is a large difference between being productive and mastering a language. But, that's no reason to shy away from a language you've never seen. It bugs me when people say, "I'm a PHP developer." or when a job offer says, "Java developer". After a few years experience of being a developer, new languages and APIs really shouldn't be intimidating and going from never seeing a language to being productive with it shouldn't take very long at all. I know this is controversial but it's my opinion.


Opinion: developers should be testing their own code

I've seen too much crap handed off to test only to have it not actually fix the bug in question, incurring communication overhead and fostering irresponsible practices.


Performance does matter.


The ability to create UML diagrams similar to pretzels with mad cow disease is not actually a useful software development skill.

The whole point of diagramming code is to visualise connections, to see the shape of a design. But once you pass a certain rather low level of complexity, the visualisation is too much to process mentally. Making connections pictorially is only simple if you stick to straight lines, which typically makes the diagram much harder to read than if the connections were cleverly grouped and routed along the cardinal directions.

Use diagrams only for broad communication purposes, and only when they're understood to be lies.


Programmers who don't code in their spare time for fun will never become as good as those that do.

I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.

(Note: I'm not saying good programmers do nothing else than programming, but they do more than program from 9 to 5)


Respect the Single Responsibility Principle

At first glance you might not think this would be controversial, but in my experience when I mention to another developer that they shouldn't be doing everything in the page load method, they often push back... so, for the children, please quit building the "do everything" method we see all too often.


Globals and/or Singletons are not inherently evil

I come from more of a sysadmin, shell, Perl (and my "real" programming), PHP type background; last year I was thrown into a Java development gig.

Singletons are evil. Globals are so evil they are not even allowed. Yet, Java has things like AOP, and now various "Dependency Injection" frameworks (we used Google Guice). AOP less so, but DI things for sure give you what? Globals. Uhh, thanks.


Don't use inheritance unless you can explain why you need it.


Source Control: Anything But SourceSafe

Also: Exclusive locking is evil.

I once worked somewhere where they argued that exclusive locks meant that you were guaranteeing that people were not overwriting someone else's changes when you checked in. The problem was that in order to get any work done, if a file was locked devs would just change their local file to writable and merging (or overwriting) the source control with their version when they had the chance.


Opinion: There should not be any compiler warnings, only errors. Or, formulated differently: you should always compile your code with -Werror.

Reason: Either the compiler thinks it is something which should be corrected, in case it should be an error, or it is not necessary to fix, in which case the compiler should just shut up.


Variable_Names_With_Bloody_Underscores

or even worse

CAPITALIZED_VARIABLE_NAMES_WITH_BLOODY_UNDERSCORES

should be globally expunged... with prejudice! CamelCapsAreJustFine. (Global constants notwithstanding)

GOTO statements are for use by developers under the age of 11

Any language that does not support pointers is not worthy of the name

.Net = .Bloat. The flagship of Microsoft's efforts at web site development (Expressionless Web 2) is the finest example of slow, bloated cr@pw@re ever written. (try Web Studio instead)

Response: OK well let me address the Underscore issue a little. From the C link you provided:

-Global constants should be all caps with '_' separators. This I actually agree with because it is so BLOODY_OBVIOUS

-Take for example NetworkABCKey. Notice how the C from ABC and K from key are confused. Some people don't mind this and others just hate it so you'll find different policies in different code so you never know what to call something.

I fall into the former category. I choose names VERY carefully and if you cannot figure out in one glance that the K belongs to Key, then English is probably not your first language.

  • C Function Names

    • In a C++ project there should be very few C functions.
    • For C functions use the GNU convention of all lower case letters with '_' as the word delimiter.

Justification

* It makes C functions very different from any C++ related names. 

Example

int some_bloody_function() { }

These "standards" and conventions are simply the arbitrary decisions handed down through time. I think that while they make a certain amount of logical sense, They clutter up code and make something that should be short and sweet to read, clumsy, long winded and cluttered.

C has been adopted as the de-facto standard, not because it is friendly, but because it is pervasive. I can write 100 lines of C code in 20 with a syntactically friendly high level language.

This makes the program flow easy to read, and as we all know, revisiting code after a year or more means following the breadcrumb trail all over the place.

I do use underscores, but for global variables only, as they are few and far between and they stick out clearly. Other than that, a well thought out CamelCaps() function/variable name has yet to let me down!


SQL could and should have been done better. Because its original spec was limited, various vendors have been extending the language in different directions for years. SQL that is written for MS-SQL is different from SQL for Oracle, IBM, MySQL, Sybase, etc. Other serious languages (take C++ for example) were carefully standardized so that C++ written under one compiler will generally compile unmodified under another. Why couldn't SQL have been designed and standardized better?

HTML was a seriously broken choice as a browser display language. We've spent years extending it through CSS, XHTML, Javascript, Ajax, Flash, etc. in order to make a useable UI, and the result is still not as good as your basic thick-client windows app. Plus, a competent web programmer now needs to know three or four languages in order to make a decent UI.

Oh yeah. Hungarian notation is an abomination.


Although I'm in full favor of Test-Driven Development (TDD), I think there's a vital step before developers even start the full development cycle of prototyping a solution to the problem.

We too often get caught up trying to follow our TDD practices for a solution that may be misdirected because we don't know the domain well enough. Simple prototypes can often elucidate these problems.

Prototypes are great because you can quickly churn through and throw away more code than when you're writing tests first (sometimes). You can then begin the development process with a blank slate but a better understanding.


Generated documentation is nearly always totally worthless.

Or, as a corollary: Your API needs separate sets of documentation for maintainers and users.

There are really two classes of people who need to understand your API: maintainers, who must understand the minutiae of your implementation to be effective at their job, and users, who need a high-level overview, examples, and thorough details about the effects of each method they have access to.

I have never encountered generated documentation that succeeded in either area. Generally, when programmers write comments for tools to extract and make documentation out of, they aim for somewhere in the middle--just enough implementation detail to bore and confuse users yet not enough to significantly help maintainers, and not enough overview to be of any real assistance to users.

As a maintainer, I'd always rather have clean, clear comments, unmuddled by whatever strange markup your auto-doc tool requires, that tell me why you wrote that weird switch statement the way you did, or what bug this seemingly-redundant parameter check fixes, or whatever else I need to know to actually keep the code clean and bug-free as I work on it. I want this information right there in the code, adjacent to the code it's about, so I don't have to hunt down your website to find it in a state that lends itself to being read.

As a user, I'd always rather have a thorough, well-organized document (a set of web pages would be ideal, but I'd settle for a well-structured text file, too) telling me how your API is structured, what methods do what, and how I can accomplish what I want to use your API to do. I don't want to see internally what classes you wrote to allow me to do work, or what files they're in for that matter. And I certainly don't want to have to download your source so I can figure out exactly what's going on behind the curtain. If your documentation were good enough, I wouldn't have to.

That's how I see it, anyway.


"Googling it" is okay!

Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.

Too often I hear googling answers to problems being the target of criticism, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.

What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.

(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)


How about this one:

Garbage collectors actually hurt programmers' productivity and make resource leaks harder to find and fix

Note that I am talking about resources in general, and not only memory.


Less code is better than more!

If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.


Agile sucks.


"Everything should be made as simple as possible, but not simpler." - Einstein.


A degree in computer science does not---and is not supposed to---teach you to be a programmer.

Programming is a trade, computer science is a field of study. You can be a great programmer and a poor computer scientist and a great computer scientist and an awful programmer. It is important to understand the difference.

If you want to be a programmer, learn Java. If you want to be a computer scientist, learn at least three almost completely different languages. e.g. (assembler, c, lisp, ruby, smalltalk)


I don't know if it's really controversial, but how about this: Method and function names are the best kind of commentary your code can have; if you find yourself writing a comment, turn the piece of code you're commenting into a function/method.

Doing this has the pleasant side-effect of forcing you to decompose your program well, avoids having comments that can quickly become out of sync with reality, gives you something you can grep the codebase for, and leaves your code with a fresh lemon odour.
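
A small before/after sketch in Python (invented example) of what I mean:

raw = "# config\nhost = localhost\n"

# Before: a comment props up an anonymous chunk of code
# -- strip blank lines and comments --
lines = [l for l in raw.splitlines() if l.strip() and not l.strip().startswith("#")]

# After: the comment becomes the function name, greppable and reusable
def significant_lines(text):
    return [l for l in text.splitlines() if l.strip() and not l.strip().startswith("#")]

lines = significant_lines(raw)   # ['host = localhost']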


Storing XML in a CLOB in a relational database is often a horrible cop-out. Not only is it hideous in terms of performance, it shifts responsibility for correctly managing structure of the data away from the database architect and onto the application programmer.


Inheritance is evil and should be deprecated.

The truth is that aggregation is better in all cases. Statically typed OOP languages can't avoid inheritance; it's the only way to describe what a method wants from a type. But dynamic languages and duck typing can live without it. Ruby mixins are much more powerful than inheritance and a lot more controllable.
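
A small Python sketch (my own names) of the aggregation-plus-duck-typing point: the wrapper needs no base class, it just forwards to whatever it aggregates.

class FileStore:
    def save(self, data):
        print("writing", data, "to disk")

class AuditedStore:
    """No inheritance: aggregate any object with a save() method and decorate it."""
    def __init__(self, inner):
        self._inner = inner

    def save(self, data):
        print("audit:", data)
        self._inner.save(data)

store = AuditedStore(FileStore())
store.save("invoice-42")   # works with any duck-typed store, no common base class required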


Design Patterns are a symptom of Stone Age programming language design

They have their purpose. A lot of good software gets built with them. But the fact that there was a need to codify these "recipes" for psychological abstractions about how your code works/should work speaks to a lack of programming languages expressive enough to handle this abstraction for us.

The remedy, I think, lies in languages that allow you to embed more and more of the design into the code, by defining language constructs that might not exist or might not have general applicability but really really make sense in situations your code deals with incessantly. The Scheme people have known this for years, and there are things possible with Scheme macros that would make most monkeys-for-hire piss their pants.


Reflection has no place in production code

Reflection breaks static analysis including refactoring tools and static type checking. Reflection also breaks the normal assumptions developers have about code. For example: adding a method to a class (that doesn't shadow some other method in the class) should never have any effect, but when reflection is being used, some other piece of code may "discover" the new method and decide to call it. Actually determining if such code exists is intractable.

I do think it's fine to use reflection in tests and in code generators.

Yes, this does mean that I try to avoid frameworks that use reflection. (it's too bad that Java lacks proper compile-time meta-programming support)
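
Here's a contrived Python sketch (invented names) of the action-at-a-distance I'm describing - adding a harmless-looking method changes behaviour because reflective code elsewhere "discovers" it:

class Order:
    def validate_total(self):
        return True
    # Someone later adds this "helper"...
    def validate_discount(self):
        raise NotImplementedError("not wired up yet")

def run_all_validations(obj):
    # Reflection: scan for anything that looks like a validator and call it.
    for name in dir(obj):
        if name.startswith("validate_"):
            print("reflectively calling", name)
            getattr(obj, name)()

run_all_validations(Order())   # now blows up on validate_discount; no static tool saw it coming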


It IS possible to secure your application.

Every time someone asks a question about how to either prevent users from pirating their app, or secure it from hackers, the answer is that it's impossible. Nonsense. If you truly believe that, then leave your doors unlocked (or just take them off the house!). And don't bother going to the doctor, either. You're mortal - trying to cure a sickness is just postponing the inevitable.

Just because someone might be able to pirate your app or hack your system doesn't mean you shouldn't try to reduce the number of people who will do so. What you're really doing is making it require more work to break in than the intruder/pirate is willing to do.

Just like a deadbolt and ADT on your house will keep the burglars out, reasonable anti-piracy and security measures will keep hackers and pirates out of your way. Of course, the more tempting it would be for them to break in, the more security you need.


Okay, I said I'd give a bit more detail on my "sealed classes" opinion. I guess one way to show the kind of answer I'm interested in is to give one myself :)

Opinion: Classes should be sealed by default in C#

Reasoning:

There's no doubt that inheritance is powerful. However, it has to be somewhat guided. If someone derives from a base class in a way which is completely unexpected, this can break the assumptions in the base implementation. Consider two methods in the base class, where one calls another - if these methods are both virtual, then that implementation detail has to be documented, otherwise someone could quite reasonably override the second method and expect a call to the first one to work. And of course, as soon as the implementation is documented, it can't be changed... so you lose flexibility.

C# took a step in the right direction (relative to Java) by making methods sealed by default. However, I believe a further step - making classes sealed by default - would have been even better. In particular, it's easy to override methods (or not explicitly seal existing virtual methods which you don't override) so that you end up with unexpected behaviour. This wouldn't actually stop you from doing anything you can currently do - it's just changing a default, not changing the available options. It would be a "safer" default though, just like the default access in C# is always "the most private visibility available at that point."

By making people explicitly state that they wanted people to be able to derive from their classes, we'd be encouraging them to think about it a bit more. It would also help me with my laziness problem - while I know I should be sealing almost all of my classes, I rarely actually remember to do so :(

Counter-argument:

I can see an argument that says that a class which has no virtual methods can be derived from relatively safely without the extra inflexibility and documentation usually required. I'm not sure how to counter this one at the moment, other than to say that I believe the harm of accidentally-unsealed classes is greater than that of accidentally-sealed ones.


Writing it yourself can be a valid option.

In my experience there seems to be too much enthusiasm when it comes to using 3rd party code to solve a problem. The option of solving the problem themselves usually does not cross people's minds. Although don't get me wrong, I am not advocating never using libraries. What I am saying is: among the possible frameworks and modules you are considering to use, add the option of implementing the solution yourself.

But why would you code your own version?

  • Don't reinvent the wheel. But, if you only need a piece of wood, do you really need a whole cart wheel? In other words, do you really need openCV to flip an image along an axis?
  • Compromise. You usually have to make compromises concerning your design, in order to be able to use a specific library. Is the amount of changes you have to incorporate worth the functionality you will receive?
  • Learning. You have to learn to use these new frameworks and modules. How long will it take you? Is it worth your while? Will it take longer to learn than to implement?
  • Cost. Not everything is for free. Although, this includes your time. Consider how much time the software you are about to use will save you, and whether it is worth its price. (Also remember that you have to invest time to learn it.)
  • You are a programmer, not ... a person who just clicks things together (sorry, couldn't think of anything witty).

The last point is debatable.


Singletons are not evil

There is a place for singletons in the real world, and methods to get around them (i.e. the monostate pattern) are simply singletons in disguise. For instance, a Logger is a perfect candidate for a singleton. Additionally, so is a message pump. My current app uses distributed computing, and different objects need to be able to send appropriate messages. There should only be one message pump, and everyone should be able to access it. The alternative is passing an object to my message pump everywhere it might be needed and hoping that a new developer doesn't just new one up without thinking and then wonder why his messages are going nowhere. The uniqueness of the singleton is the most important part, not its availability. The singleton has its place in the world.
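
For what it's worth, a minimal Python sketch (invented names, not production code) of the message pump case - one well-known instance, reachable from anywhere, without threading it through every constructor:

class MessagePump:
    _instance = None

    @classmethod
    def instance(cls):
        # Lazily create the single shared pump; the uniqueness is the point.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self._queue = []

    def post(self, message):
        self._queue.append(message)

    def drain(self):
        messages, self._queue = self._queue, []
        return messages

# Any object, anywhere, can reach the one pump:
MessagePump.instance().post("node 7 went offline")
print(MessagePump.instance().drain())   # ['node 7 went offline']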


You don't have to program everything

I'm getting tired of how everything, but then everything, needs to be stuffed into a program, as if that is always faster. Everything needs to be web-based, everything needs to be done via a computer. Please, just use your pen and paper. It's faster and less maintenance.


It's ok to write garbage code once in a while

Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a Console or Web App, write some inline sql (feels good), and blast out the requirement.


VB 6 could be used for good as well as evil. It was a Rapid Application Development environment in a time of over complicated coding.

I have hated VB vehemently in the past, and still mock VB.NET (probably in jest) as a Fisher Price language due to my dislike of classical VB, but in its day, nothing could beat it for getting the job done.


The word 'evil' is an abused and overused word on Stack Overflow and similar forums.

People who use it have too little imagination.


Microsoft should stop supporting anything dealing with Visual Basic.


Objects Should Never Be In An Invalid State

Unfortunately, so many ORM frameworks mandate zero-arg constructors for all entity classes, using setters to populate the member variables. In those cases, it's very difficult to know which setters must be called in order to construct a valid object.

MyClass c = new MyClass(); // Object in invalid state. Doesn't have an ID.
c.setId(12345); // Now object is valid.

In my opinion, it should be impossible for an object to ever find itself in an invalid state, and the class's API should actively enforce its class invariants after every method call.

Constructors and mutator methods should atomically transition an object from one valid state to another. This is much better:

MyClass c = new MyClass(12345); // Object starts out valid. Stays valid.

As the consumer of some library, it's a huuuuuuge pain to keep track of whether all the right setters have been invoked before attempting to use an object, since the documentation usually provides no clues about the class's contract.
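
The same idea as a quick Python sketch (invented class, just to illustrate): the constructor refuses to produce an invalid object, so there is no "call the right setters first" protocol for callers to guess at.

class Account:
    def __init__(self, account_id, balance=0):
        # Enforce the invariants up front; there is no window where the object is half-built.
        if account_id is None:
            raise ValueError("an Account must have an id")
        if balance < 0:
            raise ValueError("balance cannot be negative")
        self._account_id = account_id
        self._balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount   # one valid state to another, atomically

a = Account(12345)   # valid from the first moment
a.deposit(100)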


You shouldn't settle on the first way you find to code something that "works."

I really don't think this should be controversial, but it is. People see an example from elsewhere in the code, from online, or from some old "Teach yourself Advanced Power SQLJava#BeansServer in 3.14159 minutes" book dated 1999, and they think they know something and they copy it into their code. They don't walk through the example to find out what each line does. They don't think about the design of their program and see if there might be a more organized or more natural way to do the same thing. They don't make any attempt at keeping their skill sets up to date to learn that they are using ideas and methods deprecated in the last year of the previous millennium. They don't seem to have the experience to learn that what they're copying has created specific horrific maintenance burdens for programmers for years and that they can be avoided with a little more thought.

In fact, they don't even seem to recognize that there might be more than one way to do something.

I come from the Perl world, where one of the slogans is "There's More Than One Way To Do It." (TMTOWTDI) People who've taken a cursory look at Perl have written it off as "write-only" or "unreadable," largely because they've looked at crappy code written by people with the mindset I described above. Those people have given zero thought to design, maintainability, organization, reduction of duplication in code, coupling, cohesion, encapsulation, etc. They write crap. Those people exist programming in every language, and easy to learn languages with many ways to do things give them plenty of rope and guns to shoot and hang themselves with. Simultaneously.

But if you hang around the Perl world for longer than a cursory look, and watch what the long-timers in the community are doing, you see a remarkable thing: the good Perl programmers spend some time seeking to find the best way to do something. When they're naming a new module, they ask around for suggestions and bounce their ideas off of people. They hand their code out to get looked at, critiqued, and modified. If they have to do something nasty, they encapsulate it in the smallest way possible in a module for use in a more organized way. Several implementations of the same idea might hang around for awhile, but they compete for mindshare and marketshare, and they compete by trying to do the best job, and a big part of that is by making themselves easily maintainable. Really good Perl programmers seem to think hard about what they are doing and looking for the best way to do things, rather than just grabbing the first idea that flits through their brain.

Today I program primarily in the Java world. I've seen some really good Java code, but I see a lot of junk as well, and I see more of the mindset I described at the beginning: people settle on the first ugly lump of code that seems to work, without understanding it, without thinking if there's a better way.

You will see both mindsets in every language. I'm not trying to impugn Java specifically. (Actually I really like it in some ways ... maybe that should be my real controversial opinion!) But I'm coming to believe that every programmer needs to spend a good couple of years with a TMTOWTDI-style language, because even though conventional wisdom has it that this leads to chaos and crappy code, it actually seems to produce people who understand that you need to think about the repercussions of what you are doing instead of trusting your language to have been designed to make you do the right thing with no effort.

I do think you can err too far in the other direction: i.e., perfectionism that totally ignores your true needs and goals (often the true needs and goals of your business, which is usually profitability). But I don't think anyone can be a truly great programmer without learning to invest some greater-than-average effort in thinking about finding the best (or at least one of the best) way to code what they are doing.


Copy/Pasting is not an antipattern; in fact it helps with not making more bugs

My rule of thumb - type only what cannot be copy/pasted. If creating a similar method, class, or file, copy an existing one and change what's needed. (I am not talking about duplicating code that should have been put into a single method.)

I usually never even type variable names - I either copy-paste them or use IDE autocompletion. If I need some DAO method, I copy a similar one and change what's needed (even if 90% of it will be changed). It may look like extreme laziness or lack of knowledge to some, but I almost never have to deal with problems caused by misspelling something trivial, and those problems are usually tough to catch (if not detected at the compile level).

Whenever I step away from my copy-pasting rule and start typing things out, I always misspell something (it's just statistics; nobody can write perfect text off the bat) and then spend more time trying to figure out where.


According to the amount of feedback I've gotten, my most controversial opinion, apparently, is that programmers don't always read the books they claim to have read. This is followed closely by my opinion that a programmer with a formal education is better than the same programmer who is self-taught (but not necessarily better than a different programmer who is self-taught).


Women make better programmers than men.

The female programmers I've worked with don't get wedded to "their" code as much as men do. They're much more open to criticism and new ideas.


Not very controversial AFAIK but... AJAX was around way before the term was coined and everyone needs to 'let it go'. People were using it for all sorts of things. No one really cared about it though.

Then suddenly POW! Someone coined the term and everyone jumped on the AJAX bandwagon. Suddenly people are now experts in AJAX, as if 'experts' in dynamically loading data weren't around before. I think it's one of the biggest contributing factors leading to the brutal destruction of the internet. That and "Web 2.0".


I often get shouted down when I claim that the code is merely an expression of my design. I quite dislike the way I see so many developers design their system "on the fly" while coding it.

The amount of time and effort wasted when one of these cowboys falls off his horse is amazing - and 9 times out of 10 the problem they hit would have been uncovered with just a little upfront design work.

I feel that modern methodologies do not emphasize the importance of design in the overall software development process. Eg, the importance placed on code reviews when you haven't even reviewed your design! It's madness.


Lower camelCase is stupid and unsemantic

Using lower camelCase makes the name/identifier ("name" used from this point) look like a two-part thing. Upper CamelCase, however, gives a clear indication that all the words belong together.

Hungarian notation is different ... because the first part of the name is a type indicator, and so it has a separate meaning from the rest of the name.

Some might argue that lower camelCase should be used for functions/procedures, especially inside classes. This is popular in Java and object oriented PHP. However, there is no reason to do that to indicate that they are class methods, because BY THE WAY THEY ARE ACCESSED it becomes more than clear that these are just that.

Some code examples:

// Java
myobj.objMethod()
// doesn't the dot and parens indicate that objMethod is a method of myobj?

// PHP
$myobj->objMethod()
// doesn't the pointer and parens indicate that objMethod is a method of myobj?

Upper CamelCase is useful for class names and other static names. All non-static content should be recognised by the way it is accessed, not by its name format(!)

Here's my homogeneous code example, where name behaviours are indicated by things other than their names... (also, I prefer underscores to separate words in names).

// Java
my_obj = new MyObj()    // Clearly a class, since it's upper CamelCase
my_obj.obj_method()     // Clearly a method, since it's executed
my_obj.obj_var          // Clearly an attribute, since it's referenced

// PHP
$my_obj = new MyObj()
$my_obj->obj_method()
$my_obj->obj_var
MyObj::MyStaticMethod()

# Python
MyObj = MyClass # copies the reference of the class to a new name
my_obj = MyObj() # Clearly a class, being instantiated
my_obj.obj_method() # Clearly a method, since it's executed
my_obj.obj_var # clearly an attribute, since it's referenced
my_obj.obj_method # Also an attribute, but one holding the instance method.
my_method = my_obj.obj_method # Instance method
my_method() # Same as my_obj.obj_method()
MyClassMethod = MyObj.obj_method # Attribute holding the plain (unbound) method
MyClassMethod(my_obj) # Same as my_obj.obj_method()
MyClassMethod(MyObj) # Passing the class itself as the explicit first argument

So there it is - my completely obsubjective opinion on camelCase.


Commenting is bad

Whenever code needs comments to explain what it is doing, the code is too complicated. I try to always write code that is self-explanatory enough to not need very many comments.


That software can be bug free if you have the right tools and take the time to write it properly.


That, erm, people should comment their code? It seems to be pretty controversial around here...

The code only tells me what it actually does, not what it was supposed to do

The time I see a function calculating the point value of an Australian Bond Future is the time I want to see some comments that indicate what the coder thought the calculation should be!


All variables/properties should be readonly/final by default.

The reasoning is a bit analogous to the sealed argument for classes, put forward by Jon. One entity in a program should have one job, and one job only. In particular, it makes absolutely no sense for most variables and properties to ever change value. There are basically two exceptions.

  1. Loop variables. But then, I argue that the variable actually doesn't change value at all. Rather, it goes out of scope at the end of the loop and is re-instantiated in the next turn. Therefore, immutability would work nicely with loop variables and everyone who tries to change a loop variable's value by hand should go straight to hell.

  2. Accumulators. For example, imagine the case of summing over the values in an array, or even a list/string that accumulates some information about something else.

    Today, there are better means to accomplish the same goal. Functional languages have higher-order functions, Python has list comprehension and .NET has LINQ. In all these cases, there is no need for a mutable accumulator / result holder.

    Consider the special case of string concatenation. In many environments (.NET, Java), strings are actually immutable. Why then allow assignment to a string variable at all? Much better to use a builder class (i.e. a StringBuilder) all along.

I realize that most languages today just aren't built to accommodate my wish. In my opinion, all these languages are fundamentally flawed for this reason. They would lose nothing of their expressiveness, power, and ease of use if they were changed to treat all variables as read-only by default and didn't allow any assignment to them after their initialization.
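
A minimal Java sketch of the flavour I'm after (the names are made up, and it assumes Java 9+ for List.of): the accumulator disappears into a higher-order operation, string building goes through a builder, and everything else is final.

import java.util.List;

public class ReadOnlyByDefault {
    public static void main(String[] args) {
        final List<Integer> values = List.of(3, 1, 4, 1, 5);

        // Accumulator replaced by a higher-order operation: no mutable "sum" variable.
        final int sum = values.stream().mapToInt(Integer::intValue).sum();

        // String concatenation via a builder instead of reassigning a String variable.
        final StringBuilder report = new StringBuilder();
        for (final int v : values) {   // the loop variable is effectively re-bound each turn
            report.append(v).append(' ');
        }

        System.out.println(sum + ": " + report.toString().trim());
    }
}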


To be really controversial:

You know nothing!

or in other words:

I know that I know nothing.

(this could be paraphrased in many kinds but I think you get it.)

When starting with computers/developing, IMHO there are three stages everyone has to walk through:

The newbie: knows nothing (this is fact)

The intermediate: thinks he knows something/very much(/all) (this is conceit)

The professional: knows that he knows nothing (because as a programmer most of the time you have to work on things you have never done before). This is no bad thing: I love familiarizing myself with new things all the time.

I think as a programmer you have to know how to learn - or better: To learn to learn (because remember: You know nothing! ;)).


People complain about removing 'goto' from the language. I happen to think that any sort of conditional jump is highly overrated, and that 'if', 'while', 'switch' and the general-purpose 'for' loop should be used with extreme caution.

Every time you make a comparison and a conditional jump, a tiny bit of complexity is added, and this complexity adds up quickly once the call stack gets a couple hundred items deep.

My first choice is to avoid the conditional, but if it isn't practical my next preference is to keep the conditional complexity in constructors or factory methods.

Clearly this isn't practical for many projects and algorithms (like control flow loops), but it is something I enjoy pushing on.

-Rick


1) The Business Apps farce:

I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.

Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either, which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and slow to write. You have massive APIs, half of which exist just to integrate the work of the other APIs, interfaces that are impossible to recycle, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.

How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?

The point of this is not that complexity==bad; it's that unnecessary complexity==bad. I've worked in massive enterprise installations where some of it was necessary, but even there, in most cases a few home-grown scripts and a simple web frontend are all that's needed to solve most use cases.

I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.

2) The n-years-of-experience-required:

Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.

3) The common "computer science" degree curriculum:

The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.


Don't worry too much about what language to learn; use the industry heavyweights like C# or Python. Languages like Ruby are fun in the bedroom, but don't do squat in workplace scenarios. Languages like C# and Java can handle small to very large software projects. If anyone says otherwise, they're talking about a scripting language. Period!

Before starting a project, consider how much support and how many code samples are available on the net. Again, choosing a language like Ruby, which has very few code samples on the web compared to Java for example, will only cause you grief further down the road when you're stuck on a problem.

You can't post a message on a forum and expect an answer back while your boss is asking you how your coding is going. What are you going to say? "I'm waiting for someone to help me out on this forum"

Learn one language and learn it well. Learning multiple languages may carry over skills and practices, but you'll only ever be OK at all of them. Be good at one. There are entire books dedicated to threading in Java, which, when you think about it, is only one namespace out of over 100.

Master one or be ok at lots.


Relational database systems will be the best thing since sliced bread...

... when we (hopefully) get them, that is. SQL databases suck so hard it's not funny.

What I find amusing (if sad) is certified DBAs who think an SQL database system is a relational one. Speaks volumes for the quality of said certification.

Confused? Read C. J. Date's books.

edit

Why is it called Relational and what does that word mean?

These days, a programmer (or a certified DBA, wink) with a strong (heck, any) mathematical background is an exception rather than the common case (I'm an instance of the common case as well). SQL with its tables, columns and rows, as well as the joke called Entity/Relationship Modelling, just adds insult to injury. No wonder the misconception that Relational Database Systems are called that because of some Relationships (Foreign Keys?) between Entities (tables) is so pervasive.

In fact, Relational derives from the mathematical concept of relations, and as such is intimately related to set theory and functions (in the mathematical, not any programming, sense).

http://en.wikipedia.org/wiki/Finitary_relation:

In mathematics (more specifically, in set theory and logic), a relation is a property that assigns truth values to combinations (k-tuples) of k individuals. Typically, the property describes a possible connection between the components of a k-tuple. For a given set of k-tuples, a truth value is assigned to each k-tuple according to whether the property does or does not hold.

An example of a ternary relation (i.e., between three individuals) is: "X was-introduced-to Y by Z", where (X,Y,Z) is a 3-tuple of persons; for example, "Beatrice Wood was introduced to Henri-Pierre Roché by Marcel Duchamp" is true, while "Karl Marx was introduced to Friedrich Engels by Queen Victoria" is false.

Wikipedia makes it perfectly clear: in a SQL DBMS, such a ternary relation would be a "table", not a "foreign key" (I'm taking the liberty to rename the "columns" of the relation: X = who, Y = to, Z = by):

CREATE TABLE introduction (
  who INDIVIDUAL NOT NULL
, to INDIVIDUAL NOT NULL
, by INDIVIDUAL NOT NULL
, PRIMARY KEY (who, to, by)
);

Also, it would contain (among others, possibly), this "row":

INSERT INTO introduction (
  who
, to
, by
) VALUES (
  'Beatrice Wood'
, 'Henri-Pierre Roché'
, 'Marcel Duchamp'
);

but not this one:

INSERT INTO introduction (
  who
, to
, by
) VALUES (
  'Karl Marx'
, 'Friedrich Engels'
, 'Queen Victoria'
);

Relational Database Dictionary:

relation (mathematics) Given sets s1, s2, ..., sn, not necessarily distinct, r is a relation on those sets if and only if it's a set of n-tuples each of which has its first element from s1, its second element from s2, and so on. (Equivalently, r is a subset of the Cartesian product s1 x s2 x ... x sn.)

Set si is the ith domain of r (i = 1, ..., n). Note: There are several important logical differences between relations in mathematics and their relational model counterparts. Here are some of them:

  • Mathematical relations have a left-to-right ordering to their attributes.
  • Actually, mathematical relations have, at best, only a very rudimentary concept of attributes anyway. Certainly their attributes aren't named, other than by their ordinal position.
  • As a consequence, mathematical relations don't really have either a heading or a type in the relational model sense.
  • Mathematical relations are usually either binary or, just occasionally, unary. By contrast, relations in the relational model are of degree n, where n can be any nonnegative integer.
  • Relational operators such as JOIN, EXTEND, and the rest were first defined in the context of the relational model specifically; the mathematical theory of relations includes few such operators.

And so on (the foregoing isn't meant to be an exhaustive list).


Manually halting a program is an effective, proven way to find performance problems.

Believable? Not to most. True? Absolutely.

Programmers are far more judgmental than necessary.

Witness all the things considered "evil" or "horrible" in these posts.

Programmers are data-structure-happy.

Witness all the discussions of classes, inheritance, private-vs-public, memory management, etc., versus how to analyze requirements.


Stay away from Celko!!!!

http://www.dbdebunk.com/page/page/857309.htm

I think it makes a lot more sense to use surrogate primary keys than "natural" primary keys.


@ocdecio: Fabian Pascal gives (in chapter 3 of his book Practical Issues in Database Management, cited in point 3 at the page that you link) stability (it always exists and doesn't change) as one of the criteria for choosing a key. When a natural key does not possess such a property, then a surrogate key must be used, for evident reasons, to which you hint in the comments.

You don't know what he wrote and you have not bothered to check, otherwise you could discover that you actually agree with him. Nothing controversial there: he was saying "don't be dogmatic, adapt general guidelines to circumstances, and, above all, think, use your brain instead of a dogmatic/cookbook/words-of-guru approach".


My controversial view is that the "While" construct should be removed from all programming languages.

You can easily replicate While using "Repeat" and a boolean flag, and I just don't believe that it's useful to have the two structures. In fact, I think that having both "Repeat...Until" and "While..EndWhile" in a language confuses new programmers.

Update - Extra Notes

One common mistake new programmers make with While is to assume that the code will break out as soon as the tested condition becomes false - so if the While test becomes false halfway through the loop body, they assume a break out of the While loop. This mistake isn't made as much with Repeat.

I'm actually not that bothered which of the two loop types is kept, as long as there's only one loop type. Another reason I have for choosing Repeat over While is that "While" functionality makes more sense written using "Repeat" than the other way around.

Second Update: I'm guessing that the fact I'm the only person currently running with a negative score here means this actually is a controversial opinion. (Unlike the rest of you. Ha!)


Whenever you expose a mutable class to the outside world, you should provide events to make it possible to observe its mutation. The extra effort may also convince you to make it immutable after all.
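
A minimal Java sketch of the idea (class and method names are mine, invented for illustration):

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A mutable counter that lets the outside world observe every mutation.
public class ObservableCounter {
    private int value;
    private final List<Consumer<Integer>> listeners = new ArrayList<>();

    public void onChange(Consumer<Integer> listener) {
        listeners.add(listener);
    }

    public void increment() {
        value++;
        listeners.forEach(l -> l.accept(value)); // notify observers of the new value
    }

    public int value() {
        return value;
    }
}

A caller can then do counter.onChange(v -> System.out.println("now " + v)); - and if wiring that up feels like too much ceremony, that's the nudge towards making the class immutable instead.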


Software Development is a VERY small subset of Computer Science.

People sometimes seem to think the two are synonymous, but in reality there are so many aspects to computer science that the average developer rarely (if ever) gets exposed to. Depending on one's career goals, I think there are a lot of CS graduates out there who would probably have been better off with some sort of Software Engineering education.

I value education highly, have a BS in Computer science and am pursuing a MS in it part time, but I think that many people who obtain these degrees treat the degree as a means to an end and benefit very little. I know plenty of people who took the same Systems Software course I took, wrote the same assembler I wrote, and to this day see no value in what they did.


Programmers need to talk to customers

Some programmers believe that they don't need to be the ones talking to customers. It's a sure way for your company to write something absolutely brilliant whose purpose and intended use nobody can work out.

You can't expect product managers and business analysts to make all the decisions. In fact, programmers should be making 990 out of the 1000 (often small) decisions that go into creating a module or feature, otherwise the product would simply never ship! So make sure your decisions are informed. Understand your customers, work with them, watch them use your software.

If you're going to write the best code, you want people to use it. Take an interest in your user base and learn from the "dumb idiots" who are out there. Don't be afraid; they'll actually love you for it.


To produce great software, you need domain specialists as much as good developers.


Detailed designs are a waste of time, and if an engineer needs them in order to do a decent job, then it's not worth employing them!

OK, so a couple of ideas are thrown together here:

1) The old idea of waterfall development, where you supposedly did all your design up front, resulting in some glorified, extremely detailed class diagrams, sequence diagrams, etc., was a complete waste of time. As I once said to a colleague, I'll be done with design once the code is finished. Which I think is partly what agile is a recognition of - that the code is the design, and that any decent developer is continually refactoring. This, of course, makes worrying that your class diagrams are out of date laughable - they always will be.

2) Management often thinks that you can usefully take a poor engineer and use them as a 'code monkey' - in other words, they're not particularly talented, but heck, can't you use them to write some code? Well... no! If you have to spend so much time writing detailed specs that you're basically specifying the code, then it will be quicker to write it yourself. You're not saving any time. If a developer isn't smart enough to use their own imagination and judgement, they're not worth employing. (Note, I'm not talking about junior engineers who are able to learn. Plenty of 'senior engineers' fall into this category.)


Defects and Enhancement Requests are the Same

Unless you are developing software on a fixed-price contract, there should be no difference, when prioritizing your backlog, between "bug", "enhancement" and "new feature" requests. OK - maybe that's not controversial, but I have worked on enterprise IT projects where the edict was that "all open bugs must be fixed in the next release", even if that left no developer time for the most desirable new features. So a problem which was encountered by 1% of the users, 1% of the time, took precedence over a new feature that might be immediately useful to 90% of the users. I like to take my entire project backlog, put estimates around each item and take it to the user community for prioritization - with items not classified as "defect", "enhancement", etc.


I have a few... there are exceptions to everything, so these are not hard and fast, but they do apply in most cases.

Nobody cares if your website validates, is XHTML strict, is standards-compliant, or has a W3C badge.

It may earn you some high-fives from fellow Web developers, but the rest of the people looking at your site couldn't give a crap whether you've validated your code or not. The vast majority of Web surfers are using IE or Firefox, and since both of those browsers are forgiving of non-standard, non-strict, unvalidated HTML, you really don't need to worry about it. If you've built a site for a car dealer, a mechanic, a radio station, a church, or a local small business, how many people in any of those businesses' target demographics do you think care about valid HTML? I'd hazard a guess it's pretty close to 0.

Most open-source software is useless, overcomplicated crap.

Let me install this nice piece of OSS I've found. It looks like it should do exactly what I want! Oh wait, first I have to install this other window manager thingy. OK. Then I need to get this command-line tool and add it to my path. Now I need the latest runtimes for X, Y, and Z. Now I need to make sure I have these processes running. OK, great... it's all configured. Now let me learn a whole new set of commands to use it. Oh cool, someone built a GUI for it. I guess I don't need to learn these commands. Wait, I need this library on here to get the GUI to work. Gotta download that now. OK, now it's working... crap, I can't figure out this terrible UI.

Sound familiar? OSS is full of complication for complication's sake, tricky installs that you need to be an expert to perform, and tools that most people wouldn't know what to do with anyway. So many projects fall by the wayside, others are so niche that very few people would use them, and some of the decent ones (FlowPlayer, OSCommerce, etc.) have such ridiculously overcomplicated and bloated source code that it defeats the purpose of being able to edit the source. You can edit the source... if you can figure out which of the 400 files contains the code that needs modification. You're really in trouble when you learn that it's all 400 of them.


Regurgitating well known sayings by programming greats out of context with the zeal of a fanatic and the misplaced assumption that they are ironclad rules really gets my goat. For example 'premature optimization is the root of all evil' as covered by this thread.

IMO, many technical problems and solutions are very context sensitive and the notion of global best practices is a fallacy.


Linq2Sql is not that bad

I've come across a lot of posts trashing Linq2Sql. I know it's not perfect, but what is?

Personally, I think it has its drawbacks, but overall it can be great for prototyping, or for developing small to medium apps. When I consider how much time it has saved me from writing boring DAL code, I can't complain, especially considering the alternatives we had not so long ago.


SQL could and should have been done better. Because its original spec was limited, various vendors have been extending the language in different directions for years. SQL written for MS-SQL is different from SQL for Oracle, IBM, MySQL, Sybase, etc. Other serious languages (take C++ for example) were carefully standardized, so that C++ written under one compiler will generally compile unmodified under another. Why couldn't SQL have been designed and standardized better?

HTML was a seriously broken choice as a browser display language. We've spent years extending it through CSS, XHTML, Javascript, Ajax, Flash, etc. in order to make a usable UI, and the result is still not as good as your basic thick-client Windows app. Plus, a competent web programmer now needs to know three or four languages in order to make a decent UI.

Oh yeah. Hungarian notation is an abomination.


The ability to create UML diagrams similar to pretzels with mad cow disease is not actually a useful software development skill.

The whole point of diagramming code is to visualise connections, to see the shape of a design. But once you pass a certain rather low level of complexity, the visualisation is too much to process mentally. Making connections pictorially is only simple if you stick to straight lines, which typically makes the diagram much harder to read than if the connections were cleverly grouped and routed along the cardinal directions.

Use diagrams only for broad communication purposes, and only when they're understood to be lies.


You must know how to type to be a programmer.

It's controversial among people who don't know how to type, but who insist that they can two-finger hunt-and-peck as fast as any typist, or that they don't really need to spend that much time typing, or that Intellisense relieves the need to type...

I've never met anyone who does know how to type, but insists that it doesn't make a difference.

See also: Programming's Dirtiest Little Secret


Reflection has no place in production code

Reflection breaks static analysis including refactoring tools and static type checking. Reflection also breaks the normal assumptions developers have about code. For example: adding a method to a class (that doesn't shadow some other method in the class) should never have any effect, but when reflection is being used, some other piece of code may "discover" the new method and decide to call it. Actually determining if such code exists is intractable.

I do think it's fine to use reflection in tests and in code generators.

Yes, this does mean that I try to avoid frameworks that use reflection. (it's too bad that Java lacks proper compile-time meta-programming support)
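
To illustrate the refactoring point with a small, hypothetical Java sketch (the class and the string below are mine): a rename-method refactoring will update the normal call site but silently miss the reflective one.

import java.lang.reflect.Method;

public class ReflectionBreaksRefactoring {
    static class Greeter {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    public static void main(String[] args) throws Exception {
        Greeter greeter = new Greeter();

        // Normal call: the compiler and refactoring tools see this.
        System.out.println(greeter.greet("Ada"));

        // Reflective call: just a string, invisible to static analysis.
        // Rename greet() and this still compiles - it fails at runtime instead.
        Method m = Greeter.class.getMethod("greet", String.class);
        System.out.println(m.invoke(greeter, "Ada"));
    }
}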


Developers should be able to modify production code without getting permission from anyone as long as they document their changes and notify the appropriate parties.


"Good Coders Code and Great Coders Reuse It" This is happening right now But "Good Coder" is the only ONE who enjoy that code. and "Great Coders" are for only to find out the bug in to that because they don't have the time to think and code. But they have time for find the bug in that code.

so don't criticize!!!!!!!!

Create your own code how YOU want.


Not everything needs to be encapsulated into its own method. Sometimes it is OK to have a method do more than one thing.


Software development is just a job

Don't get me wrong, I enjoy software development a lot. I've written a blog for the last few years on the subject. I've spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.

But in the grand scheme of things, it is just a job.

It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I'd rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.

I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.


When many new technologies appear on the scene I only learn enough about them to decide if I need them right now.

If not, I put them aside until the rough edges are knocked off by "early adopters" and then check back again every few months / years.


C++ is a good language

I practically got lynched in another question a week or two back for saying that C++ wasn't a very nice language. So now I'll try saying the opposite. ;)

No, seriously, the point I tried to make then, and will try again now, is that C++ has plenty of flaws. It's hard to deny that. It's so extremely complicated that learning it well is practically something you can dedicate your entire life to. It makes many common tasks needlessly hard, allows the user to plunge head-first into a sea of undefined behavior and unportable code, with no warnings given by the compiler.

But it's not the useless, decrepit, obsolete, hated language that many people try to make it. It shouldn't be swept under the carpet and ignored. The world wouldn't be a better place without it. It has some unique strengths that, unfortunately, are hidden behind quirky syntax, legacy cruft and not least, bad C++ teachers. But they're there.

C++ has many features that I desperately miss when programming in C# or other "modern" languages. There's a lot in it that C# and other modern languages could learn from.

It's not blindly focused on OOP, but has instead explored and pioneered generic programming. It allows surprisingly expressive compile-time metaprogramming producing extremely efficient, robust and clean code. It took in lessons from functional programming almost a decade before C# got LINQ or lambda expressions.

It allows you to catch a surprising number of errors at compile-time through static assertions and other metaprogramming tricks, which eases debugging vastly, and even beats unit tests in some ways. (I'd much rather catch an error at compile-time than afterwards, when I'm running my tests).

Deterministic destruction of variables allows RAII, an extremely powerful little trick that makes try/finally blocks and C#'s using blocks redundant.

And while some people accuse it of being "design by committee", I'd say yes, it is, and that's actually not a bad thing in this case. Look at Java's class library. How many classes have been deprecated again? How many should not be used? How many duplicate each other's functionality? How many are badly designed?

C++'s standard library is much smaller, but on the whole, it's remarkably well designed, and except for one or two minor warts (vector<bool>, for example), its design still holds up very well. When a feature is added to C++ or its standard library, it is subjected to heavy scrutiny. Couldn't Java have benefited from the same? .NET too, although it's younger and was somewhat better designed to begin with, is still accumulating a good handful of classes that are out of sync with reality, or were badly designed to begin with.

C++ has plenty of strengths that no other language can match. It's a good language.


Opinion: Unit tests don't need to be written up front, and sometimes not at all.

Reasoning: Developers suck at testing their own code. We do. That's why we generally have test teams or QA groups.

Most of the time the code we write is too intertwined with other code to be tested separately, so we end up jumping through patterned hoops to provide testability. Not that those patterns are bad, but they can sometimes add unnecessary complexity, all for the sake of unit testing...

... which often doesn't work anyway. Writing a comprehensive unit test requires a lot of time. Often more time than we're willing to give. And the more comprehensive the test, the more brittle it becomes if the interface of the thing it's testing changes, forcing a rewrite of a test that no longer compiles.


Java is the COBOL of our generation.

Everyone learns to code it. There is code written in it running in big companies that will try to keep it running for decades. Everyone comes to despise it compared to all the other choices out there, but is forced to use it anyway because it pays the bills.


If I were being controversial, I'd have to suggest that Jon Skeet isn't omnipotent..


(Unnamed) tuples are evil

  • If you're using tuples as a container for several objects with unique meanings, use a class instead.
  • If you're using them to hold several objects that should be accessible by index, use a list.
  • If you're using them to return multiple values from a method, use Out parameters instead (this does require that your language supports pass-by-reference)

  • If it's part of a code obfuscation strategy, keep using them!

I see people using tuples just because they're too lazy to bother giving NAMES to their objects. Users of the API are then forced to access items in the tuple based on a meaningless index instead of a useful name.
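
For the first bullet, a tiny Java sketch of what "use a class instead" looks like (the names are invented for illustration):

// Instead of an unnamed (double, String) pair, give the two values a home.
public class PriceQuote {
    private final double amount;
    private final String currency;

    public PriceQuote(double amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    public double amount()   { return amount; }
    public String currency() { return currency; }
}

// Callers now read quote.amount() and quote.currency()
// instead of the meaningless tuple.get(0) and tuple.get(1).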


I firmly believe that unmanaged code isn't worth the trouble. The extra maintainability expenses associated with hunting down memory leaks which even the best programmers introduce occasionally far outweigh the performance to be gained from a language like C++. If Java, C#, etc. can't get the performance you need, buy more machines.


Unit Testing won't help you write good code

The only reason to have Unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.

And furthermore, good developers will keep coupling low, which will make the addition of new code unlikely to cause problems with existing stuff.

In fact, I'll generalize that even further,

Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.

They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.


Opinion: Not having declared argument lists and return types can lead to flexible and readable code.

This opinion probably applies more to interpreted languages than compiled ones. Requiring a return type and a function argument list is great for things like IntelliSense auto-documenting your code, but it is also a restriction.

Now don't get me wrong, I am not saying throw away return types, or argument lists. They have their place. And 90% of the time they are more of a benefit than a hindrance.

There are times and places when this is useful.


Exceptions considered harmful.


The C++ STL library is so general purpose that it is optimal for no one.


Pagination is never what the user wants

If you start having the discussion about where to do pagination, in the database, in the business logic, on the client, etc. then you are asking the wrong question. If your app is giving back more data than the user needs, figure out a way for the user to narrow down what they need based on real criteria, not arbitrary sized chunks. And if the user really does want all those results, then give them all the results. Who are you helping by giving back 20 at a time? The server? Is that more important than your user?

[EDIT: clarification, based on comments]

As a real world example, let's look at this Stack Overflow question. Let's say I have a controversial programming opinion. Before I post, I'd like to see if there is already an answer that addresses the same opinion, so I can upvote it. The only option I have is to click through every page of answers.

I would prefer one of these options:

  1. Allow me to search through the answers (a way for me to narrow down what I need based on real criteria).

  2. Allow me to see all the answers so I can use my browser's "find" option (give me all the results).

The same applies if I just want to find an answer I previously read, but can't find anymore. I don't know when it was posted or how many votes it has, so the sorting options don't help. And even if I did, I still have to play a guessing game to find the right page of results. The fact that the answers are paginated and I can directly click into one of a dozen pages is no help at all.

--
bmb


Here's one which has seemed obvious to me for many years but is anathema to everyone else: it is almost always a mistake to switch off C (or C++) assertions with NDEBUG in 'release' builds. (The sole exceptions are where the time or space penalty is unacceptable).

Rationale: If an assertion fails, your program has entered a state which

  • has never been tested
  • the developer was unable to code a recovery strategy for
  • the developer has effectively documented as being inconceivable.

Yet somehow 'industry best practice' is that the thing should just muddle on and hope for the best when it comes to live runs with your customers' data.


Never implement anything as a singleton.

You can decide not to construct more than one instance, but always ensure your implementation can handle more.

I have yet to find any scenario where using a singleton is actually the right thing to do.

I got into some very heated discussions over this in the last few years, but in the end I was always right.
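
A rough Java sketch of the alternative I mean (the names are mine): construct a single instance at the application's entry point and pass it to whoever needs it, but nothing in the class itself forbids a second one.

// Not a singleton: the class doesn't care how many instances exist.
public class Configuration {
    private final String environment;

    public Configuration(String environment) {
        this.environment = environment;
    }

    public String environment() {
        return environment;
    }

    public static void main(String[] args) {
        // The application chooses to create just one and hand it around...
        Configuration production = new Configuration("production");

        // ...but a test (or a second tenant) can freely create another.
        Configuration test = new Configuration("test");

        System.out.println(production.environment() + " / " + test.environment());
    }
}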


Opinion: Frameworks and third-party components should only be used as a last resort.

I often see programmers immediately pick a framework to accomplish a task without learning the underlying approach it takes to do its work. Something will inevitably break, or we'll find a limitation we didn't account for, and then we're immediately stuck and have to rethink a major part of the system. Frameworks are fine to use as long as the choice is carefully thought out.


I'm probably gonna get roasted for this, but:

Making invisible characters syntactically significant in python was a bad idea

It's distracting, causes lots of subtle bugs for novices and, in my opinion, wasn't really needed. About the only code I've ever seen that didn't voluntarily follow some sort of decent formatting guide was from first-year CS students. And even if code doesn't follow "nice" standards, there are plenty of tools out there to coerce it into a more pleasing shape.


Source Control: Anything But SourceSafe

Also: Exclusive locking is evil.

I once worked somewhere where they argued that exclusive locks meant that you were guaranteeing that people were not overwriting someone else's changes when you checked in. The problem was that, in order to get any work done, if a file was locked devs would just change their local file to writable, then merge (or overwrite) the source-controlled version with their own when they got the chance.


Avoid indentation.

Use early returns, continues or breaks.

instead of:

if (passed != NULL)
{
   for(x in list)
   {
      if (peter)
      {
          print "peter";
          more code.
          ..
          ..
      }
      else
      {
          print "no peter?!"
      }
   }
}

do:

if (passed == NULL)
    return false;

for(x in list)
{
   if (!peter)
   {
       print "no peter?!"
       continue;
   }

   print "peter";
   more code.
   ..
   ..
}

XHTML is evil. Write HTML

You will have to set the MIME type to text/html anyway, so why fool yourself into believing that you are really writing XML? Whoever downloads your page is going to treat it as HTML, so make it HTML.

And with that, feel free and happy to not close your <li>, it isn't necessary. Don't close the html tag, the file is over anyway. It is valid HTML and it can be parsed perfectly.

It will create more readable code with less boilerplate, and you don't lose a thing. HTML parsers work well!

And when you are done, move on to HTML5. It is better.


Null references should be removed from OO languages

Coming from a Java and C# background, where it's normal to return null from a method to indicate failure, I've come to conclude that nulls cause a lot of avoidable problems. Language designers can remove a whole class of errors related to NullReferenceExceptions if they simply eliminate null references from code.

Additionally, when I call a method, I have no way of knowing whether that method can return null references unless I actually dig into the implementation. I'd like to see more languages follow F#'s model for handling nulls: F# doesn't allow programmers to return null references (at least for classes compiled in F#); instead, it requires programmers to represent empty objects using option types. The nice thing about this design is that useful information, such as whether a function can return null references, is propagated through the type system: functions which return a type 'a have a different return type than functions which return 'a option.
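
Java has since grown a weaker cousin of this idea in java.util.Optional; a minimal sketch (the map contents and names are made up, and it assumes Java 9+ for Map.of) of how the "may be absent" fact shows up in the return type:

import java.util.Map;
import java.util.Optional;

public class NoNulls {
    private static final Map<String, String> EMAILS = Map.of("ada", "ada@example.com");

    // The return type itself tells callers "this may be absent" - no surprise nulls.
    static Optional<String> findEmail(String user) {
        return Optional.ofNullable(EMAILS.get(user));
    }

    public static void main(String[] args) {
        String result = findEmail("grace")
                .map(String::toUpperCase)    // only runs if a value is present
                .orElse("no email on file"); // explicit handling of the empty case
        System.out.println(result);
    }
}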


There is no "one size fits all" approach to development

I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.

Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.

People even seem to go as far as putting badges on their blogs such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.

It isn't.

Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.


Generated documentation is nearly always totally worthless.

Or, as a corollary: Your API needs separate sets of documentation for maintainers and users.

There are really two classes of people who need to understand your API: maintainers, who must understand the minutiae of your implementation to be effective at their job, and users, who need a high-level overview, examples, and thorough details about the effects of each method they have access to.

I have never encountered generated documentation that succeeded in either area. Generally, when programmers write comments for tools to extract and make documentation out of, they aim for somewhere in the middle--just enough implementation detail to bore and confuse users yet not enough to significantly help maintainers, and not enough overview to be of any real assistance to users.

As a maintainer, I'd always rather have clean, clear comments, unmuddled by whatever strange markup your auto-doc tool requires, that tell me why you wrote that weird switch statement the way you did, or what bug this seemingly-redundant parameter check fixes, or whatever else I need to know to actually keep the code clean and bug-free as I work on it. I want this information right there in the code, adjacent to the code it's about, so I don't have to hunt down your website to find it in a state that lends itself to being read.

As a user, I'd always rather have a thorough, well-organized document (a set of web pages would be ideal, but I'd settle for a well-structured text file, too) telling me how your API is architected, what methods do what, and how I can accomplish what I want to use your API for. I don't want to see internally what classes you wrote to allow me to do work, or what files they're in for that matter. And I certainly don't want to have to download your source so I can figure out exactly what's going on behind the curtain. If your documentation were good enough, I wouldn't have to.

That's how I see it, anyway.


Software development is an art.


Sometimes it's okay to use regexes to extract something from HTML. Seriously, wrangle with an obtuse parser, or use a quick regex like /<a href="([^"]+)">/? It's not perfect, but your software will be up and running much quicker, and you can probably use yet another regex to verify that the match that was extracted is something that actually looks like a URL. Sure, it's hacky, and probably fails on several edge-cases, but it's good enough for most usage.

Based on the massive volume of "How use regex get HTML?" questions that get posted here almost daily, and the fact that every answer is "Use an HTML parser", this should be controversial enough.
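
For instance, a quick-and-dirty link extractor along those lines might look like this in Java (a sketch only; it deliberately ignores plenty of legal HTML variations, which is exactly the trade-off being argued for):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QuickLinkScrape {
    public static void main(String[] args) {
        String html = "<p>See <a href=\"https://example.com/docs\">the docs</a> "
                    + "and <a href=\"https://example.com/faq\">the FAQ</a>.</p>";

        // Good enough for well-behaved markup; no attempt to handle every edge case.
        Matcher m = Pattern.compile("<a href=\"([^\"]+)\">").matcher(html);
        while (m.find()) {
            System.out.println(m.group(1));   // prints each extracted URL
        }
    }
}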


Relational Databases are a waste of time. Use object databases instead!

Relational database vendors try to fool us into believing that the only scalable, persistent and safe storage in the world is relational databases. I am a certified DBA. Have you ever spent hours trying to optimize a query and had no idea what was going wrong? Relational databases don't let you make your own search paths when you need them. You give away much of the control over the speed of your app into the hands of people you've never met, and they are not as smart as you think.

Sure, sometimes in a well-maintained database they come up with a quick answer for a complex query. But the price you pay for this is too high! You have to choose between writing raw SQL every time you want to read an entry of your data, which is dangerous, or using an object-relational mapper, which adds more complexity and more things outside your control.

More importantly, you are actively forbidden from coming up with smart search algorithms, because every damn roundtrip to the database costs you around 11ms. It is too much. Imagine you know this super-graph algorithm which will answer a specific question, which might not even be expressible in SQL!, in due time. But even if your algorithm is linear, and interesting algorithms are not linear, forget about combining it with a relational database as enumerating a large table will take you hours!

Compare that with SandstoneDb, or Gemstone for Smalltalk! If you are into Java, give db4o a shot.

So, my advice is: use an object DB. Sure, they aren't perfect and some queries will be slower. But you will be surprised how many will be faster, because loading the objects will not require all these strange transformations between SQL and your domain data. And if you really need speed for a certain query, object databases have the query optimizer you should trust: your brain.


Managers know everything

It's been my experience that managers usually didn't get where they are by knowing code. No matter what you tell them, it's too long, not right, or too expensive.

And another that follows on from the first:

There's never time to do it right but there's always time to do it again

A good engineer friend once said that in anger to describe a situation where management halved his estimates, got a half-assed version out of him then gave him twice as much time to rework it because it failed. It's a fairly regular thing in the commercial software world.

And one that came to mind today while trying to configure a router with only a web interface:

Web interfaces are for suckers

The CLI on the previous version of the firmware was oh so nice. This version has a web interface, which attempts to hide all of the complexity of networking from clueless IT droids, and can't even get VLANs correct.


Developing on .NET is not programming. It's just stitching together other people's code.

Having come from a coding background where you were required to know the hardware, and where this is still a vital requirement in my industry, I view high-level languages as simply assembling someone else's work. Nothing essentially wrong with this, but is it 'programming'?

MS has made a mint from doing the hard work and presenting 'developers' with symbolic instruction syntax. I seem to now know more and more developers who appear constrained by the existence or non-existence of a class to perform a job.

My opinion comes from the notion that to be a programmer you should be able to program at the lowest level your platform allows. So if you're programming .NET, you need to be able to stick your head under the hood and work out the solution yourself, rather than rely on someone else creating a class for you. That's simply lazy and does not qualify as 'development' in my book.


A picture is not worth a thousand words.

Some pictures might be worth a thousand words. Most of them are not. This trite old aphorism is mostly untrue and is a pathetic excuse for many a lazy manager who did not want to read carefully created reports and documentation to say "I need you to show me in a diagram."

My wife studied for a linguistics major and saw several fascinating proofs against the conventional wisdom on pictures and logos: they do not break across language and cultural barriers, they usually do not communicate anywhere near as much information as correct text, they simply are no substitute for real communication.

In particular, labeled bubbles connected with lines are useless if the lines are unlabeled and unexplained, and/or if every line has a different meaning instead of signifying the same relationship (unless distinguished from each other in some way). If your lines sometimes signify relationships and sometimes indicate actions and sometimes indicate the passage of time, you're really hosed.

Every good programmer knows you use the tool suited for the job at hand, right? Not all systems are best specified and documented in pictures. Graphical specification languages that can be automatically turned into provably-correct, executable code or whatever are a spectacular idea, if such things exist. Use them when appropriate, not for everything under the sun. Entity-Relationship diagrams are great. But not everything can be summed up in a picture.

Note: a table may be worth its weight in gold. But a table is not the same thing as a picture. And again, a well-crafted short prose paragraph may be far more suitable for the job at hand.


Software is like toilet paper. The less you spend on it, the bigger of a pain in the ass it is.

That is to say, outsourcing is rarely a good idea.

I've always figured this to be true, but I never really knew the extent of it until recently. I have been "maintaining" (read: "fixing") some off-shored code recently, and it is a huge mess. It is easily costing our company more than the savings from not developing it in-house.

People outside your business will inherently know less about your business model, and therefore will not do as good a job programming any system that works within your business. Also, they know they won't have to support it, so there's no incentive to do anything other than half-ass it.


You need to watch out for Object-Obsessed Programmers.

e.g. if you write a class that models built-in types such as ints or floats, you may be an object-obsessed programmer.


1. You should not follow web standards - all the time.

2. You don't need to comment your code.

As long as it's understandable by a stranger.


SESE (Single Entry Single Exit) is not law

Example:

public int foo() {
   if( someCondition ) {
      return 0;
   }

   return -1;
}

vs:

public int foo() {
   int returnValue = -1;

   if( someCondition ) {
      returnValue = 0;
   }

   return returnValue;
}

My team and I have found that abiding by this all the time is actually counter-productive in many cases.


XML is highly overrated

I think too many jump onto the XML bandwagon before using their brains... XML for web stuff is great, as it's designed for it. Otherwise I think some problem definition and design thoughts should preempt any decision to use it.

My 5 cents


Globals and/or Singletons are not inherently evil

I come from more of a sysadmin, shell, Perl (and my "real" programming), PHP type background; last year I was thrown into a Java development gig.

Singletons are evil. Globals are so evil they are not even allowed. Yet, Java has things like AOP, and now various "Dependency Injection" frameworks (we used Google Guice). AOP less so, but DI things for sure give you what? Globals. Uhh, thanks.
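
A sketch of why I say that, in plain Java (no framework needed to make the point):

// A classic "by the book" singleton.
class Config {
    private static final Config INSTANCE = new Config();
    private Config() {}
    public static Config getInstance() { return INSTANCE; }
    public String get(String key) { return System.getProperty(key); }
}

Any code, anywhere, can call Config.getInstance().get("db.url"), which is exactly how a global variable gets used; the singleton is just a global with a longer name and a fancier justification.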


Modern C++ is a beautiful language.

There, I said it. A lot of people really hate C++, but honestly, I find modern C++ with STL/Boost style programming to be a very expressive, elegant, and incredibly productive language most of the time.

I think most people who hate C++ are basing that on bad experiences with OO. C++ doesn't do OO very well because polymorphism often depends on heap-allocated objects, and C++ doesn't have automatic garbage collection.

But C++ really shines when it comes to generic libraries and functional-programming techniques which make it possible to build incredibly large, highly-maintainable systems. A lot of people say C++ tries to do everything, but ends up doing nothing very well. I'd probably agree that it doesn't do OO as well as other languages, but it does generic programming and functional programming better than any other mainstream C-based language. (C++0x will only further underscore this truth.)

I also appreciate how C++ lets me get low-level if necessary, and provides full access to the operating system.

Plus RAII. Seriously. I really miss destructors when I program in other C-based languages. (And no, garbage collection does not make destructors useless.)


A majority of the 'user-friendly' Fourth Generation Languages (SQL included) are worthless overrated pieces of rubbish that should have never made it to common use.

4GLs in general have a wordy and ambiguous syntax. Though 4GLs are supposed to allow 'non technical people' to write programs, you still need the 'technical' people to write and maintain them anyway.

4GL programs in general are harder to write, harder to read and harder to optimize than their conventional (3GL) counterparts.

4GLs should be avoided as far as possible.


Relational database systems will be the best thing since sliced bread...

... when we (hopefully) get them, that is. SQL databases suck so hard it's not funny.

What I find amusing (if sad) is certified DBAs who think an SQL database system is a relational one. Speaks volumes for the quality of said certification.

Confused? Read C. J. Date's books.

edit

Why is it called Relational and what does that word mean?

These days, a programmer (or a certified DBA, wink) with a strong (heck, any) mathematical background is an exception rather than the common case (I'm an instance of the common case as well). SQL with its tables, columns and rows, as well as the joke called Entity/Relationship Modelling, just adds insult to injury. No wonder the misconception that Relational Database Systems are called that because of some Relationships (Foreign Keys?) between Entities (tables) is so pervasive.

In fact, Relational derives from the mathematical concept of relations, and as such is intimately related to set theory and functions (in the mathematical, not any programming, sense).

[http://en.wikipedia.org/wiki/Finitary_relation][2]:

In mathematics (more specifically, in set theory and logic), a relation is a property that assigns truth values to combinations (k-tuples) of k individuals. Typically, the property describes a possible connection between the components of a k-tuple. For a given set of k-tuples, a truth value is assigned to each k-tuple according to whether the property does or does not hold.

An example of a ternary relation (i.e., between three individuals) is: "X was-introduced-to Y by Z", where (X,Y,Z) is a 3-tuple of persons; for example, "Beatrice Wood was introduced to Henri-Pierre Roché by Marcel Duchamp" is true, while "Karl Marx was introduced to Friedrich Engels by Queen Victoria" is false.

Wikipedia makes it perfectly clear: in a SQL DBMS, such a ternary relation would be a "table", not a "foreign key" (I'm taking the liberty to rename the "columns" of the relation: X = who, Y = to, Z = by):

CREATE TABLE introduction (
  who INDIVIDUAL NOT NULL
, to INDIVIDUAL NOT NULL
, by INDIVIDUAL NOT NULL
, PRIMARY KEY (who, to, by)
);

Also, it would contain (among others, possibly), this "row":

INSERT INTO introduction (
  who
, to
, by
) VALUES (
  'Beatrice Wood'
, 'Henri-Pierre Roché'
, 'Marcel Duchamp'
);

but not this one:

INSERT INTO introduction (
  who
, to
, by
) VALUES (
  'Karl Marx'
, 'Friedrich Engels'
, 'Queen Victoria'
);

Relational Database Dictionary:

relation (mathematics) Given sets s1, s2, ..., sn, not necessarily distinct, r is a relation on those sets if and only if it's a set of n-tuples each of which has its first element from s1, its second element from s2, and so on. (Equivalently, r is a subset of the Cartesian product s1 x s2 x ... x sn.)

Set si is the ith domain of r (i = 1, ..., n). Note: There are several important logical differences between relations in mathematics and their relational model counterparts. Here are some of them:

  • Mathematical relations have a left-to-right ordering to their attributes.
  • Actually, mathematical relations have, at best, only a very rudimentary concept of attributes anyway. Certainly their attributes aren't named, other than by their ordinal position.
  • As a consequence, mathematical relations don't really have either a heading or a type in the relational model sense.
  • Mathematical relations are usually either binary or, just occasionally, unary. By contrast, relations in the relational model are of degree n, where n can be any nonnegative integer.
  • Relational operators such as JOIN, EXTEND, and the rest were first defined in the context of the relational model specifically; the mathematical theory of relations includes few such operators.

And so on (the foregoing isn't meant to be an exhaustive list).


Opinion: Data driven design puts the cart before the horse. It should be eliminated from our thinking forthwith.

The vast majority of software isn't about the data, it's about the business problem we're trying to solve for our customers. It's about a problem domain, which involves objects, rules, flows, cases, and relationships.

When we start our design with the data, and model the rest of the system after the data and the relationships between the data (tables, foreign keys, and x-to-x relationships), we constrain the entire application to how the data is stored in and retrieved from the database. Further, we expose the database architecture to the software.

The database schema is an implementation detail. We should be free to change it without having to significantly alter the design of our software at all. The business layer should never have to know how the tables are set up, or if it's pulling from a view or a table, or getting the table from dynamic SQL or a stored procedure. And that type of code should never appear in the presentation layer.

Software is about solving business problems. We deal with users, cars, accounts, balances, averages, summaries, transfers, animals, messages, packages, carts, orders, and all sorts of other real tangible objects, and the actions we can perform on them. We need to save, load, update, find, and delete those items as needed. Sometimes, we have to do those things in special ways.

But there's no real compelling reason that we should take the work that should be done in the database and move it away from the data and put it in the source code, potentially on a separate machine (introducing network traffic and degrading performance). Doing so means turning our backs on the decades of work that has already been done to improve the performance of stored procedures and functions built into databases. The argument that stored procedures introduce "yet another API" to be managed is specious: of course it does; that API is a facade that shields you from the database schema, including the intricate details of primary and foreign keys, transactions, cursors, and so on, and it prevents you from having to splice SQL together in your source code.
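
To make the "facade" point concrete, a minimal sketch (JDBC; transfer_funds is a hypothetical stored procedure, and the imports needed are java.sql.* and java.math.BigDecimal):

// The application asks for a named operation; the tables, keys and transaction
// details stay behind the procedure.
void transferFunds(Connection conn, long fromAccount, long toAccount, BigDecimal amount)
        throws SQLException {
    try (CallableStatement call = conn.prepareCall("{ call transfer_funds(?, ?, ?) }")) {
        call.setLong(1, fromAccount);
        call.setLong(2, toAccount);
        call.setBigDecimal(3, amount);
        call.execute();
    }
}

If the schema changes, the procedure changes with it and this calling code does not.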

Put the horse back in front of the cart. Think about the problem domain, and design the solution around it. Then, derive the data from the problem domain.


Jon Bentley's 'Programming Pearls' is no longer a useful tome.

http://tinyurl.com/nom56r


Two brains think better than one

I firmly believe that pair programming is the number one factor when it comes to increasing code quality and programming productivity. Unfortunately, it is also highly controversial with management, who believe that "more hands => more code => $$$!"


Code as Design: Three Essays by Jack W. Reeves

The source code of any software is its most accurate design document. Everything else (specs, docs, and sometimes comments) is either incorrect, outdated or misleading.

Guaranteed to get you fired pretty much everywhere.


If you only know one language, no matter how well you know it, you're not a great programmer.

There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning then that's all you need. I don't believe it- every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.

It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.


Understanding "what" to do is at least as important as knowing "how" to do it, and almost always it's much more important than knowing the 'best' way to solve a problem. Domain-specific knowledge is often crucial to write good software.


If you're a developer, you should be able to write code

I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:

Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.

It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:

Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.

Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.

I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!
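
For reference, the first question only needs something like the sketch below (written here in Java rather than C#; treating "accurate to 5 decimal places" as "stop once a term can no longer move the fifth decimal place" is my own reading of the requirement):

// Sum the series 4 * (1 - 1/3 + 1/5 - 1/7 + ...) until the next term is too
// small to affect the fifth decimal place. Slow, but speed wasn't the point.
public static double estimatePi() {
    double sum = 0.0;
    double term;
    int n = 0;
    do {
        term = (n % 2 == 0 ? 1.0 : -1.0) / (2 * n + 1);
        sum += term;
        n++;
    } while (Math.abs(4 * term) > 1e-6);
    return 4 * sum;
}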


Edit:

There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into this here (that's a whole new question) apart from to say you're largely missing the point of the post.

Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.


There is only one design pattern: encapsulation

For example:

  • Factory method: you've encapsulated object creation
  • Strategy: you encapsulated different changeable algorithms (see the sketch after this list)
  • Iterator: you encapsulated the way to sequentially access the elements in the collection.
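
A minimal sketch of the Strategy case (hypothetical names, just to make the point): the only thing the pattern really adds is a boundary around the part that changes.

// The changeable algorithm is hidden behind one small interface.
interface Pricing {
    double priceFor(double baseAmount);
}

class RegularPricing implements Pricing {
    public double priceFor(double baseAmount) { return baseAmount; }
}

class HolidayPricing implements Pricing {
    public double priceFor(double baseAmount) { return baseAmount * 1.2; }
}

class Checkout {
    private final Pricing pricing;   // Checkout neither knows nor cares which algorithm it gets
    Checkout(Pricing pricing) { this.pricing = pricing; }
    double total(double baseAmount) { return pricing.priceFor(baseAmount); }
}

Factory and Iterator reduce to the same move: draw a line around the thing that varies.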

You don't always need a database.

If you need to store less than a few thousand "things" and you don't need locking, flat files can work and are better in a lot of ways. They are more portable, and you can hand edit them in a pinch. If you have proper separation between your data and business logic, you can easily replace the flat files with a database if your app ever needs it. And if you design it with this in mind, it reminds you to have proper separation between your data and business logic.
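
A minimal sketch of the separation I mean (hypothetical names; one "thing" per line in the file): the rest of the app only ever sees the interface, so the flat file can be swapped for a database later without touching the business logic.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// The business logic depends only on this.
interface ThingStore {
    List<String> loadAll() throws IOException;
    void saveAll(List<String> things) throws IOException;
}

// One implementation happens to be a flat file; a database-backed one could replace it.
class FlatFileThingStore implements ThingStore {
    private final Path file;
    FlatFileThingStore(Path file) { this.file = file; }

    public List<String> loadAll() throws IOException {
        return Files.exists(file) ? Files.readAllLines(file) : new ArrayList<String>();
    }

    public void saveAll(List<String> things) throws IOException {
        Files.write(file, things);   // one thing per line
    }
}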

--
bmb


The best programmers trace all their code in the debugger and test all paths.

Well... the OP said controversial!


Design patterns are hurting good design more than they're helping it.

IMO software design, especially good software design is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember - and they're far too abstract for people to really remember more than a handful. So they're not helping much.

And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere - usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.


I firmly believe that unmanaged code isn't worth the trouble. The extra maintainability expenses associated with hunting down memory leaks which even the best programmers introduce occasionally far outweigh the performance to be gained from a language like C++. If Java, C#, etc. can't get the performance you need, buy more machines.


Programming: It's a fun job.

I seem to see two generalized groups of developers. Those that don't love it, but are competent and the money is good. And those that love it to a point that is kinda creepy; it seems to be their life.

I just think it's a well-paying job that is usually interesting and fun. There is all kinds of room to learn something new every minute of every day. I can't think of another job I would prefer. But it is still a job. Compromises will be made and the stuff you produce will not always be as good as it could be.

Given my druthers, I would be on a beach drinking beer or playing with my kids.


Social skills matter more than technical skills

Agreeable but average programmers with good social skills will have a more successful career than outstanding programmers who are disagreeable people.


That, erm, people should comment their code? It seems to be pretty controversial around here...

The code only tells me what it actually does; not what it was supposed to do

The time I see a function calculating the point value of an Australian Bond Future is the time I want to see some comments that indicate what the coder thought the calculation should be!


VB 6 could be used for good as well as evil. It was a Rapid Application Development environment in a time of over complicated coding.

I have hated VB vehemently in the past, and still mock VB.NET (probably in jest) as a Fisher Price language due to my dislike of classical VB, but in its day, nothing could beat it for getting the job done.


Nobody Cares About Your Code

If you don't work on a government security clearance project and you're not in finance, odds are nobody cares what you're working on outside of your company/customer base. No one's sniffing packets or trying to hack into your machine to read your source code. This doesn't mean we should be flippant about security, because there are certainly a number of people who just want to wreak general havoc and destroy your hard work, or access stored information your company may have such as credit card data or identity data in bulk. However, I think people are overly concerned about other people getting access to your source code and taking your ideas.


"Good Coders Code and Great Coders Reuse It" This is happening right now But "Good Coder" is the only ONE who enjoy that code. and "Great Coders" are for only to find out the bug in to that because they don't have the time to think and code. But they have time for find the bug in that code.

so don't criticize!!!!!!!!

Create your own code how YOU want.


Rob Pike wrote: "Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming."

And since these days any serious data is in the millions of records, I contend that good data modeling is the most important programming skill (whether using an rdbms or something like sqlite or amazon simpleDB or google appengine data storage.)

Fancy search and sorting algorithms aren't needed any more when the data, all the data, is stored in such a data storage system.


...That the "clarification of ideas" should not be the sole responsibility of the developer...and yes xkcd made me use that specific phrase...

Too often we are handed projects that are specified in pseudo-meta-sorta-kinda-specific "code", if you want to call it that. There are often product managers who draw up the initial requirements for a project and perform next to 0% of basic logic validation.

I'm not saying that the technical approach shouldn't be drawn up by the architect, or that the specific implementation shouldn't be the responsibility of the developer, but rather that it should be the responsibility of the product manager to ensure that their requirements are logically feasible.

Personally I've been involved in too many "simple" projects that encounter a little scope creep here and there and then come across a "small" change or feature addition which contradicts previous requirements--whether implicitly or explicitly. In these cases it is all too easy for the person requesting the borderline-impossible change to become enraged that developers can't make their dream a reality.


Reuse of code is inversely proportional to its "reusability". Simply because "reusable" code is more complex, whereas quick hacks are easy to understand, so they get reused.

Software failures should take down the system, so that it can be examined and fixed. Software attempting to handle failure conditions is often worse than crashing. ie, is it better to have a system reset after crashing, or should it be indefinitely hung because the failure handler has a bug?
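
A sketch of the difference I mean (a hypothetical start-up configuration example, as plain Java fragments):

// Handling the failure: the system limps on with a config it never actually loaded.
Properties config = new Properties();
try {
    config.load(new FileInputStream("app.properties"));
} catch (IOException e) {
    // swallow it and hope for the best
}

// Crashing: the process dies right here, where the cause is still obvious.
Properties config2 = new Properties();
try (FileInputStream in = new FileInputStream("app.properties")) {
    config2.load(in);
} catch (IOException e) {
    throw new IllegalStateException("Cannot start without app.properties", e);
}

(imports: java.util.Properties, java.io.FileInputStream, java.io.IOException)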


Commenting is bad

Whenever code needs comments to explain what it is doing, the code is too complicated. I try to always write code that is self-explanatory enough to not need very many comments.


Usability problems are never the user's fault.

I cannot count how often a problem turned up when some user did something that everybody in the team considered "just a stupid thing to do". Phrases like "why would somebody do that?" or "why doesn't he just do XYZ" usually come up.

Even though many are weary of hearing me say this: if a real-life user tried to do something that either did not work, caused something to go wrong or resulted in unexpected behaviour, then it can be anybody's fault, but not the user's!

Please note that I do not mean people who intentionally misuse the software. I am referring to the presumable target group of the software.


Don't comment your code

Comments are not code and therefore when things change it's very easy to not change the comment that explained the code. Instead I prefer to refactor the crap out of code to a point that there is no reason for a comment. An example:

if(data == null)  // First time on the page

to:

bool firstTimeOnPage = data == null;
if(firstTimeOnPage)

The only time I really comment is when it's a TODO or explaining why

Widget.GetData(); // only way to grab data, TODO: extract interface or wrapper

According to the amount of feedback I've gotten, my most controversial opinion, apparently, is that programmers don't always read the books they claim to have read. This is followed closely by my opinion that a programmer with a formal education is better than the same programmer who is self-taught (but not necessarily better than a different programmer who is self-taught).


Defects and Enhancement Requests are the Same

Unless you are developing software on a fixed-price contract, there should be no difference when prioritizing your backlog between "bugs" and "enhancements" and "new feature" requests. OK - maybe that's not controversial, but I have worked on enterprise IT projects where the edict was that "all open bugs must be fixed in the next release", even if that left no developer time for the most desirable new features. So, a problem which was encountered by 1% of the users, 1% of the time took precedence over a new feature that might be immediately useful to 90% of the users. I like to take my entire project backlog, put estimates around each item and take it to the user community for prioritization - with items not classified as "defect", "enhancement", etc.


Web applications suck

My Internet connection is veeery slow. My experience with almost every Web site that is not Google is, at least, frustrating. Why doesn't anybody write desktop apps anymore? Oh, I see. Nobody wants to be bothered with learning how operating systems work. At least, not Windows. The last time you had to handle WM_PAINT, your head exploded. Creating a worker thread to perform a long task (I mean, doing it the Windows way) was totally beyond you. What the hell was a callback? Oh, my God!


Garbage collection sucks

No, it actually doesn't. But it makes the programmers suck like nothing else. In college, the first language they taught us was Visual Basic (the original one). After that, there was another course where the teachers pretended they taught us C++. But the damage was done. Nobody actually knew what this esoteric keyword delete did. After testing our programs, we either got invalid address exceptions or memory leaks. Sometimes, we got both. Among the 1% of my faculty who can actually program, only one can manage his memory by himself (at least, he pretends to), and he's writing this rant. The rest write their programs in VB.NET, which, by definition, is a bad language.


Dynamic typing sucks

Unless you're using assembler, of course (that's the kind of dynamic typing that actually deserves praise). What I mean is that the overhead imposed by dynamic, interpreted languages makes them suck. And don't give me that silly argument that different tools are good for different jobs. C is the right language for almost everything (it's fast, powerful and portable), and, when it isn't (it's not fast enough), there's always inline assembly.


I might come up with more rants, but that will be later, not now.


Relational databases are awful for web applications.

For example:

  • threaded comments
  • tag clouds
  • user search
  • maintaining record view counts
  • providing undo / revision tracking
  • multi-step wizards

A degree in computer science does not---and is not supposed to---teach you to be a programmer.

Programming is a trade, computer science is a field of study. You can be a great programmer and a poor computer scientist and a great computer scientist and an awful programmer. It is important to understand the difference.

If you want to be a programmer, learn Java. If you want to be a computer scientist, learn at least three almost completely different languages. e.g. (assembler, c, lisp, ruby, smalltalk)


Managers know everything

It's been my experience that managers didn't usually get there by knowing code. No matter what you tell them, it's too long, not right or too expensive.

And another that follows on from the first:

There's never time to do it right but there's always time to do it again

A good engineer friend once said that in anger to describe a situation where management halved his estimates, got a half-assed version out of him then gave him twice as much time to rework it because it failed. It's a fairly regular thing in the commercial software world.

And one that came to mind today while trying to configure a router with only a web interface:

Web interfaces are for suckers

The CLI on the previous version of the firmware was oh so nice. This version has a web interface, which attempts to hide all of the complexity of networking from clueless IT droids, and can't even get VLANs correct.


I also think there's nothing wrong with having binaries in source control.. if there is a good reason for it. If I have an assembly I don't have the source for, and it might not necessarily be in the same place on each dev's machine, then I will usually stick it in a "binaries" directory and reference it in a project using a relative path.

Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.


I think Java should have supported system-specific features via thin native library wrappers.

Phrased another way, I think Sun's determination to require that Java only support portable features was a big mistake from almost everyone's perspective.

A zillion years later, SWT came along and solved the basic problem of writing a portable native-widget UI, but by then Microsoft was forced to fork Java into C# and lots of C++ had been written that could otherwise have been done in civilized Java. Now the world runs on a blend of C#, VB, Java, C++, Ruby, Python and Perl. All the Java programs still look and act weird, except for the SWT ones.

If Java had come out with thin wrappers around native libraries, people could have written the SWT-equivalent entirely in Java, and we could have, as things evolved, made portable apparently-native apps in Java. I'm totally for portable applications, but it would have been better if that portability were achieved in an open market of middleware UI (and other feature) libraries, and not through simply reducing the user's menu to junk or faking the UI with Swing.

I suppose Sun thought that ISV's would suffer with Java's limitations and then all the world's new PC apps would magically run on Suns. Nice try. They ended up not getting the apps AND not having the language take off until we could use it for logic-only server back-end code.

If things had been done differently maybe the local application wouldn't be, well, dead.


Development projects are bound to fail unless the team of programmers is given as a whole complete empowerment to make all decisions related to the technology being used.


Microsoft Windows is the best platform for software development.

Reasoning: Microsoft spoils its developers with excellent and cheap development tools, the platform and its APIs are well documented, the platform is evolving at a rapid rate which creates a lot of opportunities for developers, the OS has a large user base which is important for obvious commercial reasons, there is a big community of Windows developers, and I haven't yet been fired for choosing Microsoft.


Getters and Setters are Highly Overused

I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public, maybe a bit different if you're using threads (but that is generally not the case) or if your accessors have business/presentation logic (something 'strange' at least).

I'm not in favor of public fields, but I am against making a getter/setter (or Property) for every one of them, and then claiming that doing that is encapsulation or information hiding... ha!

UPDATE:

This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).

First of all: anyone who uses public fields deserves jail time

Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.

Many people think:

private fields + public accessors == encapsulation

I say (automatic or not) generation of getter/setter pair for your fields effectively goes against the so called encapsulation you are trying to achieve.
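
To illustrate, a trivial sketch: the two classes below expose exactly the same thing to the outside world; one just takes more typing.

// "Encapsulated", supposedly.
class Account {
    private double balance;
    public double getBalance() { return balance; }
    public void setBalance(double balance) { this.balance = balance; }
}

// The same surface area, minus the ceremony.
class AccountWithPublicField {
    public double balance;
}

Real encapsulation would expose behaviour (deposit, withdraw) and keep the balance representation private.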

Lastly, let me quote Uncle Bob in this topic (taken from chapter 6 of "Clean Code"):

There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?


You'll never use enough languages, simply because every language is the best fit for only a tiny class of problems, and it's far too difficult to mix languages.

Pet examples: Java should be used only when the spec is very well thought out (because of lots of interdependencies meaning refactoring hell) and when working with concrete concepts. Perl should only be used for text processing. C should only be used when speed trumps everything, including flexibility and security. Key-value pairs should be used for one-dimensional data, CSV for two-dimensional data, XML for hierarchical data, and a DB for anything more complex.


Development is 80% about the design and 20% about coding

I believe that developers should spend 80% of time designing at the fine level of detail, what they are going to build and only 20% actually coding what they've designed. This will produce code with near zero bugs and save a lot on test-fix-retest cycle.

Getting to the metal (or IDE) early is like premature optimization, which is known to be the root of all evil. Thoughtful upfront design (I'm not necessarily talking about an enormous design document, simple drawings on a white board will work as well) will yield much better results than just coding and fixing.


As most others here, I try to adhere to principles like DRY and not being a human compiler.

Another strategy I want to push is "tell, don't ask". Instead of cluttering all objects with getters/setters essentially making a sieve of them, I'd like to tell them to do stuff.

This seems to go straight against good enterprise practice, with its dumb entity objects and a thicker service layer (that does plenty of asking). Hmmm, thoughts?
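
Roughly what I mean, as a sketch (order and applyDiscountIfEligible are hypothetical names):

// Asking: the caller pulls the data out and does the work itself.
if (order.getTotal() > 100.0) {
    order.setTotal(order.getTotal() * 0.9);
}

// Telling: the object that owns the data does the work.
order.applyDiscountIfEligible();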


We do a lot of development here using a Model-View-Controller framework we built. I'm often telling my developers that we need to violate the rules of the MVC design pattern to make the site run faster. This is a hard sell for developers, who are usually unwilling to sacrifice well-designed code for anything. But performance is our top priority in building web applications, so sometimes we have to make concessions in the framework.

For example, the view layer should never talk directly to the database, right? But if you are generating large reports, the app will use a lot of memory to pass that data up through the model and controller layers. If you have a database that supports cursors, it can make the app a lot faster to hit the database directly from the view layer.

Performance trumps development standards, that's my controversial view.


Inversion of control does not eliminate dependencies, but it sure does a great job of hiding them.


Member variables should never be declared private (in java)

If you declare something private, you prevent any future developer from deriving from your class and extending the functionality. Essentially, by writing "private" you are implying that you know more now about how your class can be used than any future developer might ever know. Whenever you write "private", you ought to write "protected" instead.
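
A small sketch of what "protected" buys a future developer (hypothetical classes; imports are java.util.ArrayList and java.util.List):

class Counter {
    protected int count;                 // private would wall this off from subclasses
    public void increment() { count++; }
}

class AuditedCounter extends Counter {
    private final List<Integer> history = new ArrayList<Integer>();
    @Override
    public void increment() {
        super.increment();
        history.add(count);              // only possible because count is protected
    }
}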

Classes should never be declared final (in java)

Similarly, if you declare a class as final (which prevents it from being extended -- prevents it from being used as a base class for inheritance), you are implying that you know more than any future programmer might know about what is the right and proper way to use your class. This is never a good idea. You don't know everything. Someone might come up with a perfectly suitable way to extend your class that you didn't think of.

Java Beans are a terrible idea.

The java bean convention -- declaring all members as private and then writing get() and set() methods for every member -- forces programmers to write boilerplate, error-prone, tedious, and lengthy code, where no code is needed. Just make the member variables public! Trust in your ability to change it later, if you need to change the implementation (hint: 99% of the time, you never will).


Software sucks due to a lack of diversity. No offense to any race, but things work pretty well when a profession is made up of different races and both genders. Just look at overusing non-renewable energy. It is going great because everyone is contributing, not just the "stereotypical guy".


To quote the late E. W. Dijkstra:

Programming is one of the most difficult branches of applied mathematics; the poorer mathematicians had better remain pure mathematicians.

Computer Science is no more about computers than astronomy is about telescopes.

I don't understand how one can claim to be a proper programmer without being able to solve pretty simple maths problems such as this one. A CRUD monkey - perhaps, but not a programmer.


The vast majority of software being developed does not involve the end-user when gathering requirements.

Usually it's just some managers who are providing 'requirements'.


Keep your business logic out of the DB. Or at a minimum, keep it very lean. Let the DB do what it's intended to do. Let code do what code is intended to do. Period.

If you're a one man show (basically, arrogant & egotistical, not listening to the wisdom of others just because you're in control), do as you wish. I don't believe you're that way since you're asking to begin with. But I've met a few when it comes to this subject and felt the need to specify.

If you work with DBA's but do your own DB work, keep clearly defined partitions between your business objects, the gateway between them and the DB, and the DB itself.

If you work with DBA's and aren't allowed to do your DB work (either by policy or because they're prima donnas), you're very close to being a fool placing your reliance on them to get anything done by putting code-dependent business logic in your DB entities (sprocs, functions, etc.).

If you're a DBA, make developers keep their DB entities clean & lean.


XHTML is evil. Write HTML

You will have to set the MIME type to text/html anyway, so why fool yourself into believing that you are really writing XML? Whoever is going to download your page is going to believe that it is HTML, so make it HTML.

And with that, feel free and happy to not close your <li>, it isn't necessary. Don't close the html tag, the file is over anyway. It is valid HTML and it can be parsed perfectly.

It will create more readable, less boilerplate code and you don't lose a thing. HTML parsers work just fine!

And when you are done, move on to HTML5. It is better.


Most consulting programmers suck and should not be allowed to write production code.

IMHO-Probably about 60% or more


Women make better programmers than men.

The female programmers I've worked with don't get wedded to "their" code as much as men do. They're much more open to criticism and new ideas.


Variable_Names_With_Bloody_Underscores

or even worse

CAPITALIZED_VARIABLE_NAMES_WITH_BLOODY_UNDERSCORES

should be globally expunged... with prejudice! CamelCapsAreJustFine. (Global constants notwithstanding)

GOTO statements are for use by developers under the age of 11

Any language that does not support pointers is not worthy of the name

.Net = .Bloat The finest example of microsoft's efforts for web site development (Expressionless Web 2) is the finest example of slow bloated cr@pw@re ever written. (try Web Studio instead)

Response: OK well let me address the Underscore issue a little. From the C link you provided:

-Global constants should be all caps with '_' separators. This I actually agree with because it is so BLOODY_OBVIOUS

-Take for example NetworkABCKey. Notice how the C from ABC and K from key are confused. Some people don't mind this and others just hate it so you'll find different policies in different code so you never know what to call something.

I fall into the former category. I choose names VERY carefully and if you cannot figure out in one glance that the K belongs to Key then English is probably not your first language.

  • C Function Names

    • In a C++ project there should be very few C functions.
    • For C functions use the GNU convention of all lower case letters with '_' as the word delimiter.

Justification

* It makes C functions very different from any C++ related names. 

Example

int some_bloody_function() { }

These "standards" and conventions are simply the arbitrary decisions handed down through time. I think that while they make a certain amount of logical sense, They clutter up code and make something that should be short and sweet to read, clumsy, long winded and cluttered.

C has been adopted as the de-facto standard, not because it is friendly, but because it is pervasive. I can write 100 lines of C code in 20 with a syntactically friendly high level language.

This makes the program flow easy to read, and as we all know, revisiting code after a year or more means following the breadcrumb trail all over the place.

I do use underscores, but for global variables only, as they are few and far between and they stick out clearly. Other than that, a well thought out CamelCaps() function/variable name has yet to let me down!


The use of hungarian notation should be punished with death.

That should be controversial enough ;)


Haven't tested it yet for controversy, but there may be potential:

The best line of code is the one you never wrote.


If you haven't read a man page, you're not a real programmer.


C (or C++) should be the first programming language

The first language should NOT be the easy one, it should be one that sets up the student's mind and prepares it for serious computer science.
C is perfect for that, it forces students to think about memory and all the low level stuff, and at the same time they can learn how to structure their code (it has functions!)

C++ has the added advantage that it really sucks :) thus the students will understand why people had to come up with Java and C#


The only "best practice" you should be using all the time is "Use Your Brain".

Too many people jumping on too many bandwagons and trying to force methods, patterns, frameworks etc onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)

EDIT: Just to clarify - I don't think people should ignore best practices, valued opinions etc. Just that people shouldn't just blindly jump on something without thinking about WHY this "thing" is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?


SQL could and should have been done better. Because its original spec was limited, various vendors have been extending the language in different directions for years. SQL that is written for MS-SQL is different than SQL for Oracle, IBM, MySQL, Sybase, etc. Other serious languages (take C++ for example) were carefully standardized so that C++ written under one compiler will generally compile unmodified under another. Why couldn't SQL have been designed and standardized better?

HTML was a seriously broken choice as a browser display language. We've spent years extending it through CSS, XHTML, Javascript, Ajax, Flash, etc. in order to make a useable UI, and the result is still not as good as your basic thick-client windows app. Plus, a competent web programmer now needs to know three or four languages in order to make a decent UI.

Oh yeah. Hungarian notation is an abomination.


The latest design patterns tend to be so much snake oil. As has been said previously in this question, overuse of design patterns can harm a design much more than help it.

If I hear one more person saying that "everyone should be using IOC" (or some similar pile of turd), I think I'll be forced to hunt them down and teach them the error of their ways.


Most comments in code are in fact a pernicious form of code duplication.

We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.

I think eventually many people just blank them out, especially those flowerbox monstrosities.

Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.

On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.


You shouldn't settle on the first way you find to code something that "works."

I really don't think this should be controversial, but it is. People see an example from elsewhere in the code, from online, or from some old "Teach yourself Advanced Power SQLJava#BeansServer in 3.14159 minutes" book dated 1999, and they think they know something and they copy it into their code. They don't walk through the example to find out what each line does. They don't think about the design of their program and see if there might be a more organized or more natural way to do the same thing. They don't make any attempt at keeping their skill sets up to date to learn that they are using ideas and methods deprecated in the last year of the previous millennium. They don't seem to have the experience to learn that what they're copying has created specific horrific maintenance burdens for programmers for years and that they can be avoided with a little more thought.

In fact, they don't even seem to recognize that there might be more than one way to do something.

I come from the Perl world, where one of the slogans is "There's More Than One Way To Do It." (TMTOWTDI) People who've taken a cursory look at Perl have written it off as "write-only" or "unreadable," largely because they've looked at crappy code written by people with the mindset I described above. Those people have given zero thought to design, maintainability, organization, reduction of duplication in code, coupling, cohesion, encapsulation, etc. They write crap. Those people exist programming in every language, and easy to learn languages with many ways to do things give them plenty of rope and guns to shoot and hang themselves with. Simultaneously.

But if you hang around the Perl world for longer than a cursory look, and watch what the long-timers in the community are doing, you see a remarkable thing: the good Perl programmers spend some time seeking to find the best way to do something. When they're naming a new module, they ask around for suggestions and bounce their ideas off of people. They hand their code out to get looked at, critiqued, and modified. If they have to do something nasty, they encapsulate it in the smallest way possible in a module for use in a more organized way. Several implementations of the same idea might hang around for awhile, but they compete for mindshare and marketshare, and they compete by trying to do the best job, and a big part of that is by making themselves easily maintainable. Really good Perl programmers seem to think hard about what they are doing and looking for the best way to do things, rather than just grabbing the first idea that flits through their brain.

Today I program primarily in the Java world. I've seen some really good Java code, but I see a lot of junk as well, and I see more of the mindset I described at the beginning: people settle on the first ugly lump of code that seems to work, without understanding it, without thinking if there's a better way.

You will see both mindsets in every language. I'm not trying to impugn Java specifically. (Actually I really like it in some ways ... maybe that should be my real controversial opinion!) But I'm coming to believe that every programmer needs to spend a good couple of years with a TMTOWTDI-style language, because even though conventional wisdom has it that this leads to chaos and crappy code, it actually seems to produce people who understand that you need to think about the repercussions of what you are doing instead of trusting your language to have been designed to make you do the right thing with no effort.

I do think you can err too far in the other direction: i.e., perfectionism that totally ignores your true needs and goals (often the true needs and goals of your business, which is usually profitability). But I don't think anyone can be a truly great programmer without learning to invest some greater-than-average effort in thinking about finding the best (or at least one of the best) way to code what they are doing.


My controversial opinion? Java doesn't suck but Java APIs do. Why do Java libraries insist on making it hard to do simple tasks? And why, instead of fixing the APIs, do they create frameworks to help manage the boilerplate code? This opinion can apply to any language that requires 10 or more lines of code to read a line from a file.
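
The classic example, sketched in old-school java.io (the imports are java.io.BufferedReader, java.io.FileReader and java.io.IOException):

// What "read the first line of a file" looked like for years.
static String readFirstLine(String path) {
    BufferedReader reader = null;
    try {
        reader = new BufferedReader(new FileReader(path));
        return reader.readLine();
    } catch (IOException e) {
        return null;   // or log it, or rethrow it wrapped... more boilerplate either way
    } finally {
        if (reader != null) {
            try { reader.close(); } catch (IOException ignored) {}
        }
    }
}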


Neither Visual Basic nor C# trumps the other. They are pretty much the same, save some syntax and formatting.


Programming is so easy a five year old can do it.

Programming in and of itself is not hard, it's common sense. You are just telling a computer what to do. You're not a genius, please get over yourself.


Inheritance is evil and should be deprecated.

The truth is that aggregation is better in all cases. Statically typed OOP languages can't avoid inheritance; it's the only way to describe what a method wants from a type. But dynamic languages and duck typing can live without it. Ruby mixins are much more powerful than inheritance and a lot more controllable.
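
A rough sketch of the aggregation alternative (hypothetical Car/Engine names): the outer object holds the other one and forwards to it, so it can swap or wrap behaviour without being welded to a base class.

class Engine {
    void start() { System.out.println("engine started"); }
}

// Aggregation: Car has an Engine and delegates, rather than extending anything.
class Car {
    private final Engine engine;
    Car(Engine engine) { this.engine = engine; }
    void start() { engine.start(); }   // forwarding instead of inheriting
}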


In my workplace, I've been trying to introduce more Agile/XP development habits. Continuous Design is the one I've felt most resistance on so far. Maybe I shouldn't have phrased it as "let's round up all of the architecture team and shoot them"... ;)


Manually halting a program is an effective, proven way to find performance problems.

Believable? Not to most. True? Absolutely.

Programmers are far more judgmental than necessary.

Witness all the things considered "evil" or "horrible" in these posts.

Programmers are data-structure-happy.

Witness all the discussions of classes, inheritance, private-vs-public, memory management, etc., versus how to analyze requirements.


For a good programmer language is not a problem.

It may not be very controversial, but I hear a lot of whining from other programmers, like "why don't they all use Delphi?", "C# sucks", "I would change company if they forced me to use Java" and so on.
What I think is that a good programmer is flexible and is able to write good programs in any programming language that he might have to learn in his life.


Two lines of code is too many.

If a method has a second line of code, it is a code smell. Refactor.


Procedural programming is fun. OOP is boring.


Software engineers should not work with computer science guys

Their differences:

  • SEs care about code reusability, while CSs just suss out code
  • SEs care about performance, while CSs just want to have things done now
  • SEs care about whole structure, while CSs do not give a toss ...


Requirements analysis, specification, design, and documentation will almost never fit into a "template." You are 100% of the time better off by starting with a blank document and beginning to type with a view of "I will explain this in such a way that if I were dead and someone else read this document, they would know everything that I know and see and understand now" and then organizing from there, letting section headings and such develop naturally and fit the task you are specifying, rather than being constrained to some business or school's idea of what your document should look like. If you have to do a diagram, rather than using somebody's formal and incomprehensible system, you're often better off just drawing a diagram that makes sense, with a clear legend, which actually specifies the system you are trying to specify and communicates the information that the developer on the other end (often you, after a few years) needs to receive.

[If you have to, once you've written the real documentation, you can often shoehorn it into whatever template straightjacket your organization is imposing on you. You'll probably find yourself having to add section headings and duplicate material, though.]

The only time templates for these kinds of documents make sense is when you have a large number of tasks which are very similar in nature, differing only in details. "Write a program to allow single-use remote login access through this modem bank, driving the terminal connection nexus with C-Kermit," "Produce a historical trend and forecast report for capacity usage," "Use this library to give all reports the ability to be faxed," "Fix this code for the year 2000 problem," and "Add database triggers to this table to populate a software product provided for us by a third-party vendor" can not all be described by the same template, no matter what people may think. And for the record, the requirements and design diagramming techniques that my college classes attempted to teach me and my classmates could not be used to specify a simple calculator program (and everyone knew it).


A good developer needs to know more than just how to code


"Good Coders Code and Great Coders Reuse It" This is happening right now But "Good Coder" is the only ONE who enjoy that code. and "Great Coders" are for only to find out the bug in to that because they don't have the time to think and code. But they have time for find the bug in that code.

so don't criticize!!!!!!!!

Create your own code how YOU want.


Design patterns are a waste of time when it comes to software design and development.

Don't get me wrong, design patterns are useful but mainly as a communication vector. They can express complex ideas very concisely: factory, singleton, iterator...

But they shouldn't serve as a development method. Too often developers architect their code using a flurry of design pattern-based classes where a more concise design would be better, both in terms of readability and performance. All that with the illusion that individual classes could be reused outside their domain. If a class is not designed for reuse or isn't part of the interface, then it's an implementation detail.

Design patterns should be used to put names on organizational features, not to dictate the way code must be written.

(It was supposed to be controversial, remember?)
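
To illustrate the complaint, here is a hedged sketch in C# (all class names are invented): the pattern-heavy version buries a one-line formatting rule under an interface, a factory and a singleton, while the concise version is just a method:

// The pattern-heavy version: a strategy interface, a factory and a singleton
// just to format a greeting.
public interface IGreetingStrategy { string Greet(string name); }
public class DefaultGreetingStrategy : IGreetingStrategy
{
    public string Greet(string name) => $"Hello, {name}!";
}
public class GreetingStrategyFactory
{
    public static GreetingStrategyFactory Instance { get; } = new GreetingStrategyFactory();
    private GreetingStrategyFactory() { }
    public IGreetingStrategy Create() => new DefaultGreetingStrategy();
}

// The concise version: if nothing else will ever "strategize" greetings,
// a plain method is easier to read and faster to call.
public static class Greeter
{
    public static string Greet(string name) => $"Hello, {name}!";
}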


Debuggers should be forbidden. This would force people to write code that is testable through unit tests, and in the end would lead to much better code quality.

Remove Copy & Paste from ALL programming IDEs. Copy & pasted code is very bad, this option should be completely removed. Then the programmer will hopefully be too lazy to retype all the code so he makes a function and reuses the code.

Whenever you use a Singleton, slap yourself. Singletons are almost never necessary, and are most of the time just a fancy name for a global variable.


UML diagrams are highly overrated

Of course there are useful diagrams e.g. class diagram for the Composite Pattern, but many UML diagrams have absolutely no value.


Regurgitating well known sayings by programming greats out of context with the zeal of a fanatic and the misplaced assumption that they are ironclad rules really gets my goat. For example 'premature optimization is the root of all evil' as covered by this thread.

IMO, many technical problems and solutions are very context sensitive and the notion of global best practices is a fallacy.


Generated documentation is nearly always totally worthless.

Or, as a corollary: Your API needs separate sets of documentation for maintainers and users.

There are really two classes of people who need to understand your API: maintainers, who must understand the minutiae of your implementation to be effective at their job, and users, who need a high-level overview, examples, and thorough details about the effects of each method they have access to.

I have never encountered generated documentation that succeeded in either area. Generally, when programmers write comments for tools to extract and make documentation out of, they aim for somewhere in the middle--just enough implementation detail to bore and confuse users yet not enough to significantly help maintainers, and not enough overview to be of any real assistance to users.

As a maintainer, I'd always rather have clean, clear comments, unmuddled by whatever strange markup your auto-doc tool requires, that tell me why you wrote that weird switch statement the way you did, or what bug this seemingly-redundant parameter check fixes, or whatever else I need to know to actually keep the code clean and bug-free as I work on it. I want this information right there in the code, adjacent to the code it's about, so I don't have to hunt down your website to find it in a state that lends itself to being read.

As a user, I'd always rather have a thorough, well-organized document (a set of web pages would be ideal, but I'd settle for a well-structured text file, too) telling me how your API is architected, what methods do what, and how I can accomplish what I want to use your API to do. I don't want to see internally what classes you wrote to allow me to do work, or files they're in for that matter. And I certainly don't want to have to download your source so I can figure out exactly what's going on behind the curtain. If your documentation were good enough, I wouldn't have to.

That's how I see it, anyway.


Before January 1st 1970, true and false were the other way around...


To quote the late E. W. Dijkstra:

Programming is one of the most difficult branches of applied mathematics; the poorer mathematicians had better remain pure mathematicians.

Computer Science is no more about computers than astronomy is about telescopes.

I don't understand how one can claim to be a proper programmer without being able to solve pretty simple maths problems such as this one. A CRUD monkey - perhaps, but not a programmer.


Singletons are not evil

There is a place for singletons in the real world, and methods to get around them (i.e. monostate pattern) are simply singletons in disguise. For instance, a Logger is a perfect candidate for a singleton. Additionally, so is a message pump. My current app uses distributed computing, and different objects need to be able to send appropriate messages. There should only be one message pump, and everyone should be able to access it. The alternative is passing an object to my message pump everywhere it might be needed and hoping that a new developer doesn't new one up without thinking and wonder why his messages are going nowhere. The uniqueness of the singleton is the most important part, not its availability. The singleton has its place in the world.
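
A minimal sketch of the kind of message pump described above, in C#; the class and its API are invented for illustration:

using System.Collections.Concurrent;

// One message pump for the whole process, reachable from anywhere,
// so nothing has to be handed an instance "just in case".
public sealed class MessagePump
{
    public static MessagePump Instance { get; } = new MessagePump();

    private readonly ConcurrentQueue<string> _queue = new ConcurrentQueue<string>();

    private MessagePump() { }   // nobody can "new one up" by accident

    public void Post(string message) => _queue.Enqueue(message);

    public bool TryDispatch(out string message) => _queue.TryDequeue(out message);
}

// Usage from any object in the system:
// MessagePump.Instance.Post("node 7 finished");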


I have a few... there are exceptions to everything, so these are not hard and fast, but they do apply in most cases.

Nobody cares if your website validates, is XHTML strict, is standards-compliant, or has a W3C badge.

It may earn you some high-fives from fellow Web developers, but the rest of the people looking at your site couldn't give a crap whether you've validated your code or not. The vast majority of Web surfers are using IE or Firefox, and since both of those browsers are forgiving of nonstandard, nonstrict, invalid HTML, you really don't need to worry about it. If you've built a site for a car dealer, a mechanic, a radio station, a church, or a local small business, how many people in any of those businesses' target demographics do you think care about valid HTML? I'd hazard a guess it's pretty close to 0.

Most open-source software is useless, overcomplicated crap.

Let me install this nice piece of OSS I've found. It looks like it should do exactly what I want! Oh wait, first I have to install this other window manager thingy. OK. Then I need to get this command-line tool and add it to my path. Now I need the latest runtimes for X, Y, and Z. Now I need to make sure I have these processes running. OK, great... it's all configured. Now let me learn a whole new set of commands to use it. Oh cool, someone built a GUI for it. I guess I don't need to learn these commands. Wait, I need this library on here to get the GUI to work. Gotta download that now. OK, now it's working... crap, I can't figure out this terrible UI.

Sound familiar? OSS is full of complication for complication's sake, tricky installs that you need to be an expert to perform, and tools that most people wouldn't know what to do with anyway. So many projects fall by the wayside, others are so niche that very few people would use them, and some of the decent ones (FlowPlayer, OSCommerce, etc) have such ridiculously overcomplicated and bloated source code that it defeats the purpose of being able to edit the source. You can edit the source... if you can figure out which of the 400 files contains the code that needs modification. You're really in trouble when you learn that it's all 400 of them.


Less code is better than more!

If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.


You can't measure productivity by counting lines of code.

Everyone knows this, but for some reason the practice still persists!


Respect the Single Responsibility Principle

At first glance you might not think this would be controversial, but in my experience when I mention to another developer that they shouldn't be doing everything in the page load method, they often push back... so, for the children, please quit building the "do everything" method we see all too often.


When someone dismisses an entire programming language as "clumsy", it usually turns out he doesn't know how to use it.


Sometimes jumping on the bandwagon is ok

I get tired of people exhibiting "grandpa syndrome" ("You kids and your newfangled Test Driven Development. Every big technology that's come out in the last decade has sucked. Back in my day, we wrote real code!"... you get the idea).

Sometimes things that are popular are popular for a reason.


C++ is one of the WORST programming languages - EVER.

It has all of the hallmarks of something designed by committee - it does not do any given job well, and does some jobs (like OO) terribly. It has a "kitchen sink" desperation to it that just won't go away.

It is a horrible "first language" to learn to program with. You get no elegance, no assistance (from the language). Instead you have bear traps and mine fields (memory management, templates, etc.).

It is not a good language to try to learn OO concepts. It behaves as "C with a class wrapper" instead of a proper OO language.

I could go on, but will leave it at that for now. I have never liked programming in C++, and although I "cut my teeth" on FORTRAN, I totally loved programming in C. I still think C was one of the great "classic" languages. Something that C++ is certainly NOT, in my opinion.

Cheers,

-R

EDIT: To respond to the comments on teaching C++. You can teach C++ in two ways - either teaching it as C "on steroids" (start with variables, conditions, loops, etc), or teaching it as a pure "OO" language (start with classes, methods, etc). You can find teaching texts that use one or other of these approaches. I prefer the latter approach (OO first) as it does emphasize the capabilities of C++ as an OO language (which was the original design emphasis of C++). If you want to teach C++ "as C", then I think you should teach C, not C++.

But the problem with C++ as a first language in my experience is that the language is simply too BIG to teach in one semester, plus most "intro" texts try and cover everything. It is simply not possible to cover all the topics in a "first language" course. You have to at least split it into 2 semesters, and then it's no longer "first language", IMO.

I do teach C++, but only as a "new language" - that is, you must be proficient in some prior "pure" language (not scripting or macros) before you can enroll in the course. C++ is a very fine "second language" to learn, IMO.

-R

'Nother Edit: (to Konrad)

I do not at all agree that C++ "is superior in every way" to C. I spent years coding C programs for microcontrollers and other embedded applications. The C compilers for these devices are highly optimized, often producing code as good as hand-coded assembler. When you move to C++, you gain a tremendous overhead imposed by the compiler in order to manage language features you may not use. In embedded applications, you gain little by adding classes and such, IMO. What you need is tight, clean code. You can write it in C++, but then you're really just writing C, and the C compilers are more optimized in these applications.

I wrote a MIDI engine, first in C, later in C++ (at the vendor's request) for an embedded controller (sound card). In the end, to meet the performance requirements (MIDI timings, etc) we had to revert to pure C for all of the core code. We were able to use C++ for the high-level code, and having classes was very sweet - but we needed C to get the performance at the lower level. The C code was an order of magnitude faster than the C++ code, but hand coded assembler was only slightly faster than the compiled C code. This was back in the early 1990s, just to place the events properly.

-R


Opinion: Never ever have different code between "debug" and "release" builds

The main reason being that release code almost never gets tested. Better to have the same code running in test as it is in the wild.


I believe the use of try/catch exception handling is worse than the use of simple return codes and associated common messaging structures to ferry useful error messages.

Littering code with try/catch blocks is not a solution.

Just passing exceptions up the stack, hoping that what's above you will do the right thing or generate an informative error, is not a solution.

Thinking you have any chance of systematically verifying that the proper exception handlers are available to address anything that could go wrong in either transparent or opaque objects is not realistic. (Think also in terms of late bindings/external libraries and unnecessary dependencies between unrelated functions in a call stack as the system evolves.)

Use of return codes is simple, can easily be systematically verified for coverage, and if handled properly forces developers to generate useful error messages rather than the all-too-common stack dumps and obscure I/O exceptions that are "exceptionally" meaningless to even the most clueful of end users.
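
A minimal sketch of what such a "common messaging structure" might look like in C#; the OpResult type and ConfigLoader are invented for illustration:

// Every operation returns one of these instead of throwing, so callers
// are forced to look at the outcome and its human-readable message.
public readonly struct OpResult
{
    public bool Ok { get; }
    public string Error { get; }          // a useful message, not a stack dump

    private OpResult(bool ok, string error) { Ok = ok; Error = error; }

    public static OpResult Success() => new OpResult(true, null);
    public static OpResult Failure(string error) => new OpResult(false, error);
}

public static class ConfigLoader
{
    public static OpResult Load(string path)
    {
        if (!System.IO.File.Exists(path))
            return OpResult.Failure($"Config file '{path}' was not found.");
        // ... parse the file ...
        return OpResult.Success();
    }
}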

--

My final objection is the use of garbage collected languages. Don't get me wrong.. I love them in some circumstances but in general for server/MC systems they have no place in my view.

GC is not infallible - even extremely well designed GC algorithms can hang on to objects too long or even forever because of non-obvious circular references in their dependency graphs.

Non-GC systems following a few simple patterns and use of memory accounting tools don't have this problem but do require more work in design and test upfront than GC environments. The tradeoff here is that memory leaks are extremely easy to spot during testing in Non-GC while finding GC related problem conditions is a much more difficult proposition.

Memory is cheap, but what happens when you leak expensive objects such as transaction handles, synchronization objects, socket connections, etc.? In my environment the very thought that you can just sit back and let the language worry about this for you is unthinkable without significant fundamental changes in how the software is designed.
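
In C#, for instance, the usual answer for those expensive objects is deterministic disposal rather than waiting for the collector; a minimal sketch (host, port and payload are whatever your protocol needs):

using System.Net.Sockets;

class ConnectionDemo
{
    static void Send(string host, int port, byte[] payload)
    {
        // The socket is released at the end of the block, deterministically,
        // instead of whenever the garbage collector finally notices it.
        using (var client = new TcpClient(host, port))
        using (var stream = client.GetStream())
        {
            stream.Write(payload, 0, payload.Length);
        }
    }
}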


Goto is OK! (is that controversial enough)
Sometimes... so give us the choice! For example, BASH doesn't have goto. Maybe there is some internal reason for this but still.
Also, goto is the building block of Assembly language. No if statements for you! :)


Ternary operators absolutely suck. They are the epitome of lazy-ass programming.

user->isLoggedIn() ? user->update() : user->askLogin();

This is so easy to screw up. A little change in revision #2:

user->isLoggedIn() && user->isNotNew(time()) ? user->update() : user->askLogin();

Oh yeah, just one more "little change."

user->isLoggedIn() && user->isNotNew(time()) ? user->update() 
    : user->noCredentials() ? user->askSignup()
        : user->askLogin();

Oh crap, what about that OTHER case?

user->isLoggedIn() && user->isNotNew(time()) && !user->isBanned() ? user->update() 
    : user->noCredentials() || !user->isBanned() ? user->askSignup()
        : user->askLogin();

NO NO NO NO. Just save us the code change. Stop being freaking lazy:

if (user->isLoggedIn()) {
    user->update();
} else {
    user->askLogin();
}

Because doing it right the first time will save us all from having to convert your crap ternaries AGAIN and AGAIN:

if (user->isLoggedIn() && user->isNotNew(time()) && !user->isBanned()) {
    user->update();
} else {
    if (user->noCredentials() || !user->isBanned()) {
        user->askSignup();
    } else {
        user->askLogin();
    }
}

It's fine if you don't know. But you're fired if you can't even google it.

Internet is a tool. It's not making you stupider if you're learning from it.


HTML 5 + JavaScript will be the most used UI programming platform of the future. Flash, Silverlight, Java Applets etc. etc. are all going to die a silent death.


A degree in Computer Science or other IT area DOES make you a more well rounded programmer

I don't care how many years of experience you have, how many blogs you've read, how many open source projects you're involved in. A qualification (I'd recommend longer than 3 years) exposes you to a different way of thinking and gives you a great foundation.

Just because you've written some better code than a guy with a BSc in Computer Science, does not mean you are better than him. What you have he can pick up in an instant which is not the case the other way around.

Having a qualification shows your commitment, the fact that you would go above and beyond experience to make yourself a better developer. Developers who are good at what they do AND have a qualification can be very intimidating.

I would not be surprised if this answer gets voted down.

Also, once you have a qualification, you slowly stop comparing yourself to those with qualifications (my experience). You realize that it all doesn't matter at the end, as long as you can work well together.

Always act mercifully towards other developers, irrespective of qualifications.


I'd rather be truly skilled/experienced in an older technology that allows me to solve real world problems effectively, as opposed to new "fashionable" technologies that are still going through the adolescent stage.


To be really controversial:

You know nothing!

or in other words:

I know that I know nothing.

(this could be paraphrased in many kinds but I think you get it.)

When starting with computers/developing, IMHO there are three stages everyone has to walk through:

The newbie: knows nothing (this is fact)

The intermediate: thinks he knows something/very much(/all) (this is conceit)

The professional: knows that he knows nothing (because as a programmer most time you have to work on things you have never done before). This is no bad thing: I love to familiarize myself to new things all the time.

I think as a programmer you have to know how to learn - or better: To learn to learn (because remember: You know nothing! ;)).


Remove classes. Number of classes (methods of classes) in .NET Framework handles exception implicitly. It's difficult to work with a dumb person.


Good Performance VS Elegant Design

They are not mutually exclusive but I can't stand over-designed class structures/frameworks that completely have no clue about performance. I don't need to have a string of new This(new That(new Whatever())); to create an object that will tell me it's 5 AM in the morning oh by the way, it's 217 days until Obama's birthday, and the weekend is 2 days away. I only wanted to know if the gym was open.

Keeping a balance between the two is crucial. The code needs to get nasty when you need to push the processor to do something intensive, such as reading terabytes of data. Save the elegance for the places that consume 10% of the resources, which is probably more than 90% of the code.


I often get shouted down when I claim that the code is merely an expression of my design. I quite dislike the way I see so many developers design their system "on the fly" while coding it.

The amount of time and effort wasted when one of these cowboys falls off his horse is amazing - and 9 times out of 10 the problem they hit would have been uncovered with just a little upfront design work.

I feel that modern methodologies do not emphasize the importance of design in the overall software development process. Eg, the importance placed on code reviews when you haven't even reviewed your design! It's madness.


I've been burned for broadcasting these opinions in public before, but here goes:

Well-written code in dynamically typed languages follows static-typing conventions

Having used Python, PHP, Perl, and a few other dynamically typed languages, I find that well-written code in these languages follows static typing conventions, for example:

  • It's considered bad style to re-use a variable with different types (for example, it's bad style to take a list variable and assign it an int, then assign the variable a bool in the same method). Well-written code in dynamically typed languages doesn't mix types.

  • A type-error in a statically typed language is still a type-error in a dynamically typed language.

  • Functions are generally designed to operate on a single datatype at a time, so that a function which accepts a parameter of type T can only sensibly be used with objects of type T or subclasses of T.

  • Functions designed to operate on many different datatypes are written in a way that constrains parameters to a well-defined interface. In general terms, if two objects of types A and B perform a similar function, but aren't subclasses of one another, then they almost certainly implement the same interface.

While dynamically typed languages certainly provide more than one way to crack a nut, most well-written, idiomatic code in these languages pays close attention to types just as rigorously as code written in statically typed languages.

Dynamic typing does not reduce the amount of code programmers need to write

When I point out how peculiar it is that so many static-typing conventions cross over into dynamic typing world, I usually add "so why use dynamically typed languages to begin with?". The immediate response is something along the lines of being able to write more terse, expressive code, because dynamic typing allows programmers to omit type annotations and explicitly defined interfaces. However, I think the most popular statically typed languages, such as C#, Java, and Delphi, are bulky by design, not as a result of their type systems.

I like to use languages with a real type system like OCaml, which is not only statically typed, but its type inference and structural typing allow programmers to omit most type annotations and interface definitions.

The existence of the ML family of languages demonstrates that we can enjoy the benefits of static typing with all the brevity of writing in a dynamically typed language. I actually use OCaml's REPL for ad hoc, throwaway scripts in exactly the same way everyone else uses Perl or Python as a scripting language.


It's ok to write garbage code once in a while

Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a Console or Web App, write some inline sql ( feels good ), and blast out the requirement.


Macros, Preprocessor instructions and Annotations are evil.

One syntax and language per file please!

// does not apply to Make files, or editor macros that insert real code.


Don't use keywords for basic types if the language has the actual type exposed. In C#, this would refer to bool (Boolean), int (Int32), float (Single), long (Int64). 'int', 'bool', etc are not actual parts of the language, but rather just 'shortcuts' or 'aliases' for the actual type. Don't use something that doesn't exist! And in my opinion, Int16, Int32, Int64, Boolean, etc. make a heck of a lot more sense than 'short', 'long', 'int'.
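
For what it's worth, a quick C# check that the keywords and the BCL names really are the very same types, which is why this stays a purely stylistic argument:

using System;

class AliasCheck
{
    static void Main()
    {
        int a = 42;      // keyword alias
        Int32 b = 42;    // the underlying BCL type

        // Both resolve to System.Int32, so only the spelling differs.
        Console.WriteLine(a.GetType() == b.GetType());      // True
        Console.WriteLine(typeof(int) == typeof(Int32));    // True
    }
}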


Having a process that involves code being approved before it is merged onto the main line is a terrible idea. It breeds insecurity and laziness in developers. People who know they could be screwing up dozens of others would be very careful about the changes they make; instead they get lulled into a sense of not having to think about all the possible clients of the code they may be affecting. The person going over the code is less likely to have thought about it as much as the person writing it, so it can actually lead to poorer quality code being checked in... though, yes, it will probably follow all the style guidelines and be well commented :)


Tcl/Tk is the best GUI language/toolkit combo ever

It may lack specific widgets and be less good-looking than the new kids on the block, but its model is elegant and so easy to use that one can build working GUIs faster by typing commands interactively than by using a visual interface builder. Its expressive power is unbeatable: other solutions (Gtk, Java, .NET, MFC...) typically require ten to one hundred LOC to get the same result as a Tcl/Tk one-liner. All without even sacrificing readability or stability.

pack [label .l -text "Hello world!"] [button .b -text "Quit" -command exit]

Programmers take their (own little limited stupid) programming language as a sacrosanct religion.

It's so funny how programmers take these discussions almost like religious believers do: no critics allowed, (often) no objective discussion, (very often) arguing based upon very limited or absent knowledge and information. For a confirmation, just read the previous answers, and especially the comments.

Also funny and another confirmation: by definition of the question "give me a controversial opinion", any controversial opinion should NOT qualify for negative votes - actually the opposite: the more controversial, the better. But how do our programmers react? Like Pavlov's dogs, voting negative on disliked opinions.

PS: I upvoted some others for fairness.


It's not the tools, it's you

Whenever developers try to do something new, like UML diagrams, charts of any sort, or project management, they first look for the perfect tool to solve the problem. After endless searches that fail to turn up the right tool, their motivation starves. All that is left then are complaints about the lack of usable software, and the insight that the plan to get organized died in the absence of a piece of software.

Well, it is only yourself dealing with organization. If you are used to being organized, you can do it with or without the aid of software (and most do without). If you are not used to being organized, nobody can help you.

So "not having the right software" is just the simplest excuse for not being organized at all.


It's okay to be Mort

Not everyone is a "rockstar" programmer; some of us do it because it's a good living, and we don't care about all the latest fads and trends; we just want to do our jobs.


Code Generation is bad

I hate languages that require you to make use of code generation (or copy&paste) for simple things, like JavaBeans with all their Getters and Setters.

C#'s AutoProperties are a step in the right direction, but for nice DTOs with Fields, Properties and Constructor parameters you still need a lot of redundancy.
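
A hedged C# sketch of that contrast; the Customer DTO is invented for illustration:

// The JavaBean-style version needs a backing field plus a getter and a
// setter for every piece of data (usually generated by the IDE).
public class CustomerBean
{
    private string name;
    public string GetName() { return name; }
    public void SetName(string value) { name = value; }
}

// C# auto-properties collapse the same thing to one line per member,
// though constructor parameters still repeat each name once more.
public class Customer
{
    public string Name { get; set; }
    public string Email { get; set; }

    public Customer(string name, string email)
    {
        Name = name;
        Email = email;
    }
}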


That (at least during initial design), every Database Table (well, almost every one) should be clearly defined to contain some clearly understandable business entity or system-level domain abstraction, and that, whether or not you use it as the primary key and as foreign keys in other dependent tables, some column (attribute) or subset of the table attributes should be clearly defined to represent a unique key for that table (entity/abstraction). This is the only way to ensure that the overall table structure represents a logically consistent representation of the complete system data structure, without overlap or misunderstood flattening. I am a firm believer in using non-meaningful surrogate keys for PKs and FKs and join functionality (for performance, ease of use, and other reasons), but I believe the tendency in this direction has taken the database community too far away from the original Codd principles, and we have lost much of the benefits (of database consistency) that natural keys provided.

So why not use both?


VB sucks
While not terribly controversial in general, when you work in a VB house it is


For a good programmer language is not a problem.

It may not be very controversial, but I hear a lot of whining from other programmers, like "why don't they all use Delphi?", "C# sucks", "I would change company if they forced me to use Java" and so on.
What I think is that a good programmer is flexible and is able to write good programs in any programming language that he might have to learn in his life.


Apparently mine is that Haskell has variables. This is both "trivial" (according to at least eight SO users) (though nobody can seem to agree on which trivial answer is correct), and a bad question even to ask (according to at least five downvoters and four who voted to close it). Oh, and I (and computer scientists and mathematicians) am wrong, though nobody can provide me with a detailed explanation of why.


Don't use inheritance unless you can explain why you need it.


80% of bugs are introduced in the design stage.
The other 80% are introduced in the coding stage.

(This opinion was inspired by reading Dima Malenko's answer. "Development is 80% about the design and 20% about coding", yes. "This will produce code with near zero bugs", no.)


I hate universities and institutes offering short courses teaching programming to newcomers. It is an outright disgrace and shows contempt for the art and science of programming.

They start teaching C, Java, VB (disgusting) to people without a good grasp of hardware and the fundamental principles of computers. They should first be taught about the MACHINE by books like Morris Mano's Computer System Architecture, and then taught the concept of instructing a machine to solve problems, instead of etching the semantics and syntax of one programming language into them.

Also, I don't understand government schools and colleges teaching children the basics of computers using commercial operating systems and software. At least in my country (India), not many students can afford to buy operating systems, or even discounted office suites, let alone the development software juggernaut (compilers, IDEs etc). This prompts theft and piracy, and makes the act of copying and stealing software from their institutes' libraries seem justified.

Again they are taught to use some products not the fundamental ideas.

Think about it: what if you were taught only that 2 x 2 is 4, and not the concept of multiplication?

Or taught how to measure the length of a pole leaning against your school's compound wall, but not the Pythagorean theorem?



I don't know if it's really controversial, but how about this: Method and function names are the best kind of commentary your code can have; if you find yourself writing a comment, turn the piece of code you're commenting into a function/method.

Doing this has the pleasant side-effect of forcing you to decompose your program well, avoids having comments that can quickly become out of sync with reality, gives you something you can grep the codebase for, and leaves your code with a fresh lemon odour.
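
A hedged C# sketch of what that refactoring looks like; the invoice-line example is invented:

using System.Collections.Generic;
using System.Linq;

static class InvoiceMath
{
    // Before: a comment explaining a blob of code at the call site:
    //     // add up the line totals, skipping cancelled lines
    //     var total = lines.Where(l => !l.Cancelled).Sum(l => l.Amount);
    //
    // After: the comment becomes the method name.
    public static decimal TotalOfActiveLines(IEnumerable<(bool Cancelled, decimal Amount)> lines) =>
        lines.Where(l => !l.Cancelled).Sum(l => l.Amount);
}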


Objects Should Never Be In An Invalid State

Unfortunately, so many of the ORM frameworks mandate zero-arg constructors for all entity classes, using setters to populate the member variables. In those cases, it's very difficult to know which setters must be called in order to construct a valid object.

MyClass c = new MyClass(); // Object in invalid state. Doesn't have an ID.
c.setId(12345); // Now object is valid.

In my opinion, it should be impossible for an object to ever find itself in an invalid state, and the class's API should actively enforce its class invariants after every method call.

Constructors and mutator methods should atomically transition an object from one valid state to another. This is much better:

MyClass c = new MyClass(12345); // Object starts out valid. Stays valid.

As the consumer of some library, it's a huuuuuuge pain to keep track of whether all the right setters have been invoked before attempting to use an object, since the documentation usually provides no clues about the class's contract.


You must know how to type to be a programmer.

It's controversial among people who don't know how to type, but who insist that they can two-finger hunt-and-peck as fast as any typist, or that they don't really need to spend that much time typing, or that Intellisense relieves the need to type...

I've never met anyone who does know how to type, but insists that it doesn't make a difference.

See also: Programming's Dirtiest Little Secret


Preconditions for arguments to methods/functions should be part of the language rather than programmers checking it always.
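
Until a language does this for you, the closest most of us get is guard clauses at the top of every public method; a minimal C# sketch, where Account, Withdraw and Deposit are invented for illustration:

using System;

public class Account
{
    public decimal Balance { get; private set; }
    public void Withdraw(decimal amount) { Balance -= amount; }
    public void Deposit(decimal amount) { Balance += amount; }
}

public static class Transfers
{
    public static void Transfer(Account from, Account to, decimal amount)
    {
        // The preconditions the language could be checking for us:
        if (from == null) throw new ArgumentNullException(nameof(from));
        if (to == null) throw new ArgumentNullException(nameof(to));
        if (amount <= 0) throw new ArgumentOutOfRangeException(nameof(amount), "Amount must be positive.");

        from.Withdraw(amount);
        to.Deposit(amount);
    }
}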


"XML and HTML are the "assembly language" of the web. Why still hack it?

It seems fairly obvious that very few developers these days learn/code in assembly language for reason that it's primitive and takes you far away from the problem you have to solve at high-level. So we invented high-level languages to encapsulates those level entities to boost our productivity thru the language elements that we can relate to more at higher level. Just like we can do more with a computer than just its constituent motherboard or CPU.

With the Web, it seems to me developers still are reading/writing and hacking HTML,CSS,XMl,schemas, etc.

I see these as the equivalent of "assembly language" of the Web or its substrates. Should we be done with it?. Sure, we need to hack it sometimes when things go wrong. But surely, that's an exception. I assert that we are replacing lower-level assembly language at machine level with its equivalent at Web-level.


Relational databases are awful for web applications.

For example:

  • threaded comments
  • tag clouds
  • user search
  • maintaining record view counts
  • providing undo / revision tracking
  • multi-step wizards

Java is not the best thing out there. Just because it comes with an 'Enterprise' sticker does not make it good. Nor does it make it fast. Nor does it make it the answer to every question.

Also, ROR is not all it is cracked up to be by the blogosphere.

While I am at it, OOP is not always good. In fact, I think it is usually bad.


Boolean variables should be used only for Boolean logic. In all other cases, use enumerations.


Boolean variables are used to store data that can only take on two possible values. The problems that arise from using them are frequently overlooked:

  • Programmers often cannot correctly identify when some piece of data should only have two possible values
  • The people who instruct programmers what to do, such as program managers or whomever writes the specs that programmers follow, often cannot correctly identify this either
  • Even when a piece of data is correctly identified as having only two possible states, that guarantee may not hold in the future.

In these cases, using Boolean variables leads to confusing code that can often be prevented by using enumerations.

Example

Say a programmer is writing software for a car dealership that sells only cars and trucks. The programmer develops a thorough model of the business requirements for his software. Knowing that the only types of vehicles sold are cars and trucks, he correctly identifies that he can use a boolean variable inside a Vehicle class to indicate whether the vehicle is a car or a truck.

class Vehicle {
 bool isTruck;
 ...
}

The software is written so when isTruck is true a vehicle is a truck, and when isTruck is false the vehicle is a car. This is a simple check performed many times throughout the code.

Everything works without trouble, until one day when the car dealership buys another dealership that sells motorcycles as well. The programmer has to update the software so that it works correctly considering the dealership's business has changed. It now needs to identify whether a vehicle is a car, truck, or motorcycle, three possible states.

How should the programmer implement this? isTruck is a boolean variable, so it can hold only two states. He could change it from a boolean to some other type that allows many states, but this would break existing logic and possibly not be backwards compatible. The simplest solution from the programmer's point of view is to add a new variable to represent whether the vehicle is a motorcycle.

class Vehicle {
 bool isTruck;
 bool isMotorcycle;
 ...
}

The code is changed so that when isTruck is true a vehicle is a truck, when isMotorcycle is true a vehicle is a motorcycle, and when they're both false a vehicle is a car.

Problems

There are two big problems with this solution:

  • The programmer wants to express the type of the vehicle, which is one idea, but the solution uses two variables to do so. Someone unfamiliar with the code will have a harder time understanding the semantics of these variables than if the programmer had used just one variable that specifies the type entirely.
  • Solving this motorcycle problem by adding a new boolean doesn't make it any easier for the programmer to deal with such situations that happen in the future. If the dealership starts selling buses, the programmer will have to repeat all these steps over again by adding yet another boolean.

It's not the developer's fault that the business requirements of his software changed, requiring him to revise existing code. But using boolean variables in the first place made his code less flexible and harder to modify to satisfy unknown future requirements (less "future-proof"). When he implemented the changes in the quickest way, the code became harder to read. Using a boolean variable was ultimately a premature optimization.

Solution

Using an enumeration in the first place would have prevented these problems.

enum EVehicleType { Truck, Car }

class Vehicle {
 EVehicleType type;
 ...
}

To accommodate motorcycles in this case, all the programmer has to do is add Motorcycle to EVehicleType, and add new logic to handle the motorcycle cases. No new variables need to be added. Existing logic shouldn't be disrupted. And someone who's unfamiliar with the code can easily understand how the type of the vehicle is stored.

Cliff Notes

Don't use a type that can only ever store two different states unless you're absolutely certain two states will always be enough. Use an enumeration if there are any possible conditions in which more than two states will be required in the future, even if a boolean would satisfy existing requirements.


"XML and HTML are the "assembly language" of the web. Why still hack it?

It seems fairly obvious that very few developers these days learn/code in assembly language for reason that it's primitive and takes you far away from the problem you have to solve at high-level. So we invented high-level languages to encapsulates those level entities to boost our productivity thru the language elements that we can relate to more at higher level. Just like we can do more with a computer than just its constituent motherboard or CPU.

With the Web, it seems to me developers still are reading/writing and hacking HTML,CSS,XMl,schemas, etc.

I see these as the equivalent of "assembly language" of the Web or its substrates. Should we be done with it?. Sure, we need to hack it sometimes when things go wrong. But surely, that's an exception. I assert that we are replacing lower-level assembly language at machine level with its equivalent at Web-level.


As there are hundreds of answers to this mine will probably end up unread, but here's my pet peeve anyway.

If you're a programmer then you're most likely awful at Web Design/Development

This website is a phenomenal resource for programmers, but an absolutely awful place to come if you're looking for XHTML/CSS help. Even the good Web Developers here are handing out links to resources that were good in the 90's!

Sure, XHTML and CSS are simple to learn. However, you're not just learning a language! You're learning how to use it well, and very few designers and developers can do that, let alone programmers. It took me ages to become a capable designer and even longer to become a good developer. I could code in HTML from the age of 10 but that didn't mean I was good. Now I am a capable designer in programs like Photoshop and Illustrator, I am perfectly able to write a good website in Notepad and am able to write basic scripts in several languages. Not only that but I have a good nose for Search Engine Optimisation techniques and can easily tell you where the majority of people are going wrong (hint: get some good content!).

Also, this place is a terrible resource for advice on web standards. You should NOT just write code to work in the different browsers. You should ALWAYS follow the standard to future-proof your code. More often than not the fixes you use on your websites will break when the next browser update comes along. Not only that but the good browsers follow standards anyway. Finally, the reason IE was allowed to ruin the Internet was because YOU allowed it by coding your websites for IE! If you're going to continue to do that for Firefox then we'll lose out yet again!

If you think that table-based layouts are as good, if not better than CSS layouts then you should not be allowed to talk on the subject, at least without me shooting you down first. Also, if you think W3Schools is the best resource to send someone to then you're just plain wrong.

If you're new to Web Design/Development don't bother with this place (it's full of programmers, not web developers). Go to a good Web Design/Development community like SitePoint.


Avoid indentation.

Use early returns, continues or breaks.

instead of:

if (passed != NULL)
{
   for(x in list)
   {
      if (peter)
      {
          print "peter";
          more code.
          ..
          ..
      }
      else
      {
          print "no peter?!"
      }
   }
}

do:

if (passed == NULL)
    return false;

for(x in list)
{
   if (!peter)
   {
       print "no peter?!"
       continue;
   }

   print "peter";
   more code.
   ..
   ..
}

PHP sucks ;-)

The proof is in the pudding.


There is no difference between software developer, coder, programmer, architect ...

I've been in the industry for more than 10 years and still find it absolutely idiotic to try to distinguish between these "roles". You write code? You're a developer. You are spending all day drawing fancy UML diagrams? You're a... well... I have no idea what you are; you're probably just trying to impress somebody. (Yes, I know UML).


Using Stored Procedures

Unless you are writing a large procedural function composed of non-reusable SQL queries, please move your stored procedures out of the database and into version control.


Enable multiple checkout. If developers are disciplined enough, we will get much more efficiency from this setting thanks to source control's automatic merging.


Microsoft is not as bad as many say they are.


Don't use stored procs in your database.

The reasons they were originally good - security, abstraction, single connection - can all be done in your middle tier with ORMs that integrate lots of other advantages.

This one is definitely controversial. Every time I bring it up, people tear me apart.
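
For what it's worth, a minimal sketch of the middle-tier alternative, assuming an ORM such as Entity Framework Core with its SQLite provider package; the Customer entity and ShopContext are invented for illustration:

using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The abstraction and the connection policy live in the middle tier.
public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options) =>
        options.UseSqlite("Data Source=shop.db");
}

public static class CustomerQueries
{
    public static System.Collections.Generic.List<string> NamesStartingWith(string prefix)
    {
        using (var db = new ShopContext())
        {
            // The ORM emits parameterized SQL, which covers the security
            // argument usually made for stored procedures.
            return db.Customers
                     .Where(c => c.Name.StartsWith(prefix))
                     .Select(c => c.Name)
                     .ToList();
        }
    }
}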


Although I'm in full favor of Test-Driven Development (TDD), I think there's a vital step before developers even start the full development cycle of prototyping a solution to the problem.

We too often get caught up trying to follow our TDD practices for a solution that may be misdirected because we don't know the domain well enough. Simple prototypes can often elucidate these problems.

Prototypes are great because you can quickly churn through and throw away more code than when you're writing tests first (sometimes). You can then begin the development process with a blank slate but a better understanding.


To Be A Good Programmer really requires working in multiple aspects of the field: Application development, Systems (Kernel) work, User Interface Design, Database, and so on. There are certain approaches common to all, and certain approaches that are specific to one aspect of the job. You need to learn how to program Java like a Java coder, not like a C++ coder and vice versa. User Interface design is really hard, and uses a different part of your brain than coding, but implementing that UI in code is yet another skill as well. It is not just that there is no "one" approach to coding, but there is not just one type of coding.


There's an awful lot of bad teaching out there.

We developers like to feel smugly superior when Joel says there's a part of the brain for understanding pointers that some people are just born without. The topics many of us discuss here and are passionate about are esoteric, but sometimes that's only because we make them so.


Most Programmers are Useless at Programming

(You did say 'controversial')

I was sat in my office at home pondering some programming problem and I ended up looking at my copy of 'Complete Spectrum ROM Disassembly' on my bookshelf and thinking:

"How many programmers today could write the code used in the Spectrum's ROM?"

The Spectrum, for those unfamiliar with it, had a Basic programming language that could do simple 2D graphics (lines, curves), file IO of a sort and floating point calculations including transcendental functions, all in 16K of Z80 code (a sub-5MHz 8-bit processor that had no FPU or integer multiply). Most graduates today would have trouble writing a 'Hello World' program that was that small.

I think the problem is that the absolute number of programmers that could do that has hardly changed but as a percentage it is quickly approaching zero. Which means that the quality of code being written is decreasing as more sub-par programmers enter the field.

Where I'm currently working, there are seven programmers including myself. Of these, I'm the only one who keeps up-to-date by reading blogs, books, this site, etc and doing programming 'for fun' at home (my wife is constantly amazed by this). There's one other programmer who is keen to write well structured code (interestingly, he did a lot of work using Delphi) and to refactor poor code. The rest are, well, not great. Thinking about it, you could describe them as 'brute force' programmers - they will force inappropriate solutions until they work after a fashion (e.g. using C# arrays with repeated Array.Resize to dynamically add items instead of using a List).

Now, I don't know if the place I'm currently at is typical, although from my previous positions I would say it is. With the benefit of hindsight, I can see common patterns that certainly didn't help any of the projects (lack of peer review of code for one).

So, 5 out of 7 programmers are rubbish.

Skizz


Not all programmers are created equal

Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.

It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)


Copy/Pasting is not an antipattern; in fact it helps with not making more bugs.

My rule of thumb: type only what cannot be copy/pasted. If creating a similar method, class, or file, copy an existing one and change what's needed. (I am not talking about duplicating code that should have been put into a single method.)

I usually never even type variable names - either copy/pasting them or using IDE autocompletion. If I need some DAO method, I copy a similar one and change what's needed (even if 90% will be changed). It may look like extreme laziness or lack of knowledge to some, but I almost never have to deal with problems caused by misspelling something trivial, and those are usually tough to catch (if not detected at the compile level).

Whenever I step away from my copy-pasting rule and start typing stuff, I always misspell something (it's just statistics; nobody can write perfect text off the bat) and then spend more time trying to figure out where.


The worst thing about recursion is recursion.


Code layout does matter

Maybe specifics of brace position should remain purely religious arguments - but it doesn't mean that all layout styles are equal, or that there are no objective factors at all!

The trouble is that the uber-rule for layout, namely "be consistent", sound as it is, is used as a crutch by many never to try to see if their default style can be improved on - and, furthermore, to claim that it doesn't even matter.

A few years ago I was studying Speed Reading techniques, and some of the things I learned about how the eye takes in information in "fixations", can most optimally scan pages, and the role of subconsciously picking up context, got me thinking about how this applied to code - and writing code with it in mind especially.

It led me to a style that tended to be columnar in nature, with identifiers logically grouped and aligned where possible (in particular I became strict about having each method argument on its own line). However, rather than long columns of unchanging structure it's actually beneficial to vary the structure in blocks so that you end up with rectangular islands that the eye can take in in a single fixation - even if you don't consciously read every character.

The net result is that, once you get used to it (which typically takes 1-3 days) it becomes pleasing to the eye, easier and faster to comprehend, and is less taxing on the eyes and brain because it's laid out in a way that makes it easier to take in.

Almost without exception, everyone I have asked to try this style (including myself) initially said, "ugh I hate it!", but after a day or two said, "I love it - I'm finding it hard not to go back and rewrite all my old stuff this way!".

I've been hoping to find the time to do more controlled experiments to collect together enough evidence to write a paper on, but as ever have been too busy with other things. However this seemed like a good opportunity to mention it to people interested in controversial techniques :-)

[Edit]

I finally got around to blogging about this (after many years parked in the "meaning to" phase): Part one, Part two, Part three.


C++ is a good language

I practically got lynched in another question a week or two back for saying that C++ wasn't a very nice language. So now I'll try saying the opposite. ;)

No, seriously, the point I tried to make then, and will try again now, is that C++ has plenty of flaws. It's hard to deny that. It's so extremely complicated that learning it well is practically something you can dedicate your entire life to. It makes many common tasks needlessly hard, allows the user to plunge head-first into a sea of undefined behavior and unportable code, with no warnings given by the compiler.

But it's not the useless, decrepit, obsolete, hated language that many people try to make it. It shouldn't be swept under the carpet and ignored. The world wouldn't be a better place without it. It has some unique strengths that, unfortunately, are hidden behind quirky syntax, legacy cruft and not least, bad C++ teachers. But they're there.

C++ has many features that I desperately miss when programming in C# or other "modern" languages. There's a lot in it that C# and other modern languages could learn from.

It's not blindly focused on OOP, but has instead explored and pioneered generic programming. It allows surprisingly expressive compile-time metaprogramming producing extremely efficient, robust and clean code. It took in lessons from functional programming almost a decade before C# got LINQ or lambda expressions.

It allows you to catch a surprising number of errors at compile-time through static assertions and other metaprogramming tricks, which eases debugging vastly, and even beats unit tests in some ways. (I'd much rather catch an error at compile-time than afterwards, when I'm running my tests).

Deterministic destruction of variables allows RAII, an extremely powerful little trick that makes try/finally blocks and C#'s using blocks redundant.

And while some people accuse it of being "design by committee", I'd say yes, it is, and that's actually not a bad thing in this case. Look at Java's class library. How many classes have been deprecated again? How many should not be used? How many duplicate each others' functionality? How many are badly designed?

C++'s standard library is much smaller, but on the whole, it's remarkably well designed, and except for one or two minor warts (vector<bool>, for example), its design still holds up very well. When a feature is added to C++ or its standard library, it is subjected to heavy scrutiny. Couldn't Java have benefited from the same? .NET too, although it's younger and was somewhat better designed to begin with, is still accumulating a good handful of classes that are out of sync with reality, or were badly designed to begin with.

C++ has plenty of strengths that no other language can match. It's a good language.


I really dislike when people tell me to use getters and setters instead of making the variable public when you should be able to both get and set the class variable.

I totally agree with it if it's to change a variable in an object inside your object, so you don't get things like a.b.c.d.e = something; but I would rather use a.x = something; than a.setX(something); I think a.x = something; is actually both easier to read and prettier than the set/get in the same example.

I don't see the reason by making:

void setX(T x) { this->x = x; }

T getX() { return x; }

which is more code, more time when you do it over and over again, and just makes the code harder to read.


"Comments are Lies"

Comments don't run and are easily neglected. It's better to express the intention with clear, refactored code illustrated by unit tests. (Unit tests written TDD of course...)

We don't write comments because they're verbose and obscure what's really going on in the code. If you feel the need to comment - find out what's not clear in the code and refactor/write clearer tests until there's no need for the comment...

... something I learned from Extreme Programming (assumes of course that you have established team norms for cleaning the code...)


Source files are SO 20th century.

Within the body of a function/method, it makes sense to represent procedural logic as linear text. Even when the logic is not strictly linear, we have good programming constructs (loops, if statements, etc) that allow us to cleanly represent non-linear operations using linear text.

But there is no reason that I should be required to divide my classes among distinct files or sort my functions/methods/fields/properties/etc in a particular order within those files. Why can't we just throw all those things within a big database file and let the IDE take care of sorting everything dynamically? If I want to sort my members by name then I'll click the member header on the members table. If I want to sort them by accessibility then I'll click the accessibility header. If I want to view my classes as an inheritance tree, then I'll click the button to do that.

Perhaps classes and members could be viewed spatially, as if they were some sort of entities within a virtual world. If the programmer desired, the IDE could automatically position classes & members that use each other near each other so that they're easy to find. Imagine being able to zoom in and out of this virtual world. Zoom all the way out and you can see namespace galaxies with little class planets in them. Zoom in to a namespace and you can see class planets with method continents and islands and inner classes as orbiting moons. Zoom in to a method, and you see... the source code for that method.

Basically, my point is that in modern languages it doesn't matter what file(s) you put your classes in or in what order you define a class's members, so why are we still forced to use these archaic practices? Remember when Gmail came out and Google said "search, don't sort"? Well, why can't the same philosophy be applied to programming languages?


Exceptions considered harmful.


Here's one which has seemed obvious to me for many years but is anathema to everyone else: it is almost always a mistake to switch off C (or C++) assertions with NDEBUG in 'release' builds. (The sole exceptions are where the time or space penalty is unacceptable).

Rationale: If an assertion fails, your program has entered a state which

  • has never been tested
  • the developer was unable to code a recovery strategy for
  • the developer has effectively documented as being inconceivable.

Yet somehow 'industry best practice' is that the thing should just muddle on and hope for the best when it comes to live runs with your customers' data.
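
A rough Java analogue of the same idea (the method and message are invented for illustration): Java assertions are compiled in but disabled at runtime unless you pass -ea, so the equivalent of "don't define NDEBUG" is either running production with java -ea, or using an explicit check that can never be switched off:

static void transfer(long amountCents) {
    if (amountCents < 0) {
        // this "assertion" stays live in release builds
        throw new IllegalStateException("negative transfer: " + amountCents);
    }
    // ... perform the transfer ...
}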


XML is highly overrated

I think too many people jump onto the XML bandwagon before using their brains... XML for web stuff is great, as it's designed for it. Otherwise I think some problem definition and design thought should precede any decision to use it.

My 5 cents


BAD IDEs make the programming language weak

Good programming IDEs really make working with certain languages easier and better to oversee. I have been a bit spoiled in my professional career; the companies I worked for always had the latest versions of Visual Studio ready to use.

For about 8 months, I have been doing a lot of Cocoa next to my work, and the Xcode editor makes working with that language just way too difficult. Overloads are difficult to find, and the overall way of handling open files just makes your screen really messy, really fast. It's really a shame, because Cocoa is a cool and powerful language to work with.

Of course die-hard Xcode fans will now vote down my post, but there are so many IDEs that are really a lot better.

People making a switch to IT who just shouldn't

This is a copy/paste from a blog post of mine, made last year.


The experiences I have are mainly about the Dutch market, but they might also apply to any other market.

We (as I group all Software Engineers together) are currently in a market that might look very good for us. Companies are desperately trying to get Software Engineers (from now on SE), no matter the price. If you switch jobs now, you can demand almost anything you want. In the Netherlands there is now a trend to even give 2 lease cars with a job, just to get you to work for them. How weird is that? How am I gonna drive 2 cars at the same time??

Of course this sounds very good for us, but it also creates a very unhealthy situation...

For example: if you are currently working for a company which is growing fast and you are trying to attract more co-workers, to finally get some serious software development off the ground, there is no one to be found without offering sky-high salaries. Trying to find quality co-workers is very hard. A lot of people are attracted to our kind of work because of the good salaries, but this also means that a lot of people without the right passion are entering our market.

Passion, yes, I think that is the right word. When you have passion for your job, your job won't stop at 05:00 PM. You will keep refreshing all of your development RSS feeds all night. You will search the internet for the latest technologies that might be interesting to use at work. And you will start about a dozen new 'promising' projects a month, just to see if you can master that latest technology you read about a couple of weeks ago (and find a useful way of actually using it).

Without that passion, the market might look very nice (because of the cars, money and of course the hot girls we attract), but I don't think it will stay interesting for very long compared to, let's say, being a fireman or a fighter pilot.

It might sound like I am trying to protect my own job here, and partly that is true. But I am also trying to protect myself against the people I don't want to work with. I want to have heated discussions about stuff I read about. I want to be able to spar with people that have the same 'passion' for the job as I have. I want colleagues that are working with me for the right reasons.

Where are those people I am looking for!!


(Unnamed) tuples are evil

  • If you're using tuples as a container for several objects with unique meanings, use a class instead.
  • If you're using them to hold several objects that should be accessible by index, use a list.
  • If you're using them to return multiple values from a method, use Out parameters instead (this does require that your language supports pass-by-reference)

  • If it's part of a code obfuscation strategy, keep using them!

I see people using tuples just because they're too lazy to bother giving NAMES to their objects. Users of the API are then forced to access items in the tuple based on a meaningless index instead of a useful name.
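
A minimal Java sketch of the "use a class instead" point (the type and method are invented for illustration):

// Instead of returning an unnamed (String, int) pair and forcing callers to
// remember which index means what, return a tiny named type.
record GeocodeResult(String normalizedAddress, int confidencePercent) { }

static GeocodeResult geocode(String rawAddress) {
    // ... real lookup elided ...
    return new GeocodeResult(rawAddress.trim(), 87);
}
// Callers read result.confidencePercent() instead of result.get(1) or result[1].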


Most consulting programmers suck and should not be allowed to write production code.

IMHO, probably about 60% or more.


Use of design patterns and documentation

In web development, what's the use of these things? I've never felt any need for them.


Arrays should by default be 1-based rather than 0-based. This is not necessarily the case with system implementation languages, but languages like Java swallowed more C oddities than they should have. "Element 1" should be the first element, not the second, to avoid confusion.

Computer science is not software development. You wouldn't hire an engineer who studied only physics, after all.

Learn as much mathematics as is feasible. You won't use most of it, but you need to be able to think that way to be good at software.

The single best programming language yet standardized is Common Lisp, even if it is verbose and has zero-based arrays. That comes largely from being designed as a way to write computations, rather than as an abstraction of a von Neumann machine.

At least 90% of all comparative criticism of programming languages can be reduced to "Language A has feature C, and I don't know how to do C or something equivalent in Language B, so Language A is better."

"Best practices" is the most impressive way to spell "mediocrity" I've ever seen.


I believe the use of try/catch exception handling is worse than the use of simple return codes and associated common messaging structures to ferry useful error messages.

Littering code with try/catch blocks is not a solution.

Just passing exceptions up the stack, hoping what's above you will do the right thing or generate an informative error, is not a solution.

Thinking you have any chance of systematically verifying that the proper exception handlers are available to address anything that could go wrong in either transparent or opaque objects is not realistic. (Think also in terms of late bindings/external libraries and unnecessary dependencies between unrelated functions in a call stack as the system evolves.)

Use of return codes is simple, can be easily and systematically verified for coverage, and, if handled properly, forces developers to generate useful error messages rather than the all-too-common stack dumps and obscure I/O exceptions that are "exceptionally" meaningless to even the most clueful of end users.
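
A minimal sketch (in Java, with made-up names) of the return-code-plus-message style this answer prefers:

enum Status { OK, NOT_FOUND, IO_ERROR }

final class Result {
    final Status status;
    final String message;   // human-readable, not a stack dump
    Result(Status status, String message) { this.status = status; this.message = message; }
}

static Result loadConfig(String path) {
    if (!new java.io.File(path).exists()) {
        return new Result(Status.NOT_FOUND, "Config file not found: " + path);
    }
    // ... parsing elided ...
    return new Result(Status.OK, "");
}
// The caller is forced to look at result.status, and result.message can be shown
// to the user directly instead of an opaque stack trace.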

--

My final objection is to the use of garbage-collected languages. Don't get me wrong... I love them in some circumstances, but in general for server/MC systems they have no place in my view.

GC is not infallible - even extremely well designed GC algorithms can hang on to objects too long or even forever, based on non-obvious circular references in their dependency graphs.

Non-GC systems following a few simple patterns and use of memory accounting tools don't have this problem but do require more work in design and test upfront than GC environments. The tradeoff here is that memory leaks are extremely easy to spot during testing in Non-GC while finding GC related problem conditions is a much more difficult proposition.

Memory is cheap, but what happens when you leak expensive objects such as transaction handles, synchronization objects, socket connections... etc.? In my environment, the very thought that you can just sit back and let the language worry about this for you is unthinkable without significant fundamental changes in how the software is described.


Goto is OK! (is that controversial enough)
Sometimes... so give us the choice! For example, BASH doesn't have goto. Maybe there is some internal reason for this but still.
Also, goto is the building block of Assembly language. No if statements for you! :)


QA should know the code (indirectly) better than development. QA gets paid to find things development didn't intend to happen, and they often do. :) (Btw, I'm a developer who just values good QA guys a whole bunch -- far too few of them... far too few).


That the Law of Demeter, considered in context of aggregation and composition, is an anti-pattern.


Hardcoding is good!

Really, it's more efficient and much easier to maintain in many cases!

The number of times I've seen constants put into parameter files... really, how often will you change the freezing point of water or the speed of light?

For C programs, just hard-code these kinds of values into a header file; for Java, into a static class, etc.

When these parameters have a drastic effect on your program's behaviour, you really want to do a regression test on every change; this seems more natural with hard-coded values. When things are stored in parameter/property files, the temptation is to think "this is not a program change so I don't need to test it".

The other advantage is it stops people messing with vital values in the parameter/property files because there aren't any!
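
A minimal Java sketch of the "static class of constants" approach the answer suggests (the class and constant names are made up):

final class PhysicalConstants {
    static final double FREEZING_POINT_CELSIUS = 0.0;
    static final double SPEED_OF_LIGHT_M_PER_S = 299_792_458.0;
    private PhysicalConstants() { }   // no instances; it's just a namespace for values
}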


I think it's fine to use goto statements, if you use them in a sane way (and a sane programming language). They can often make your code a lot easier to read and don't force you to use some twisted logic just to get one simple thing done.


Assembly is the best first programming language.


Extension Methods are the work of the Devil

Everyone seems to think that extension methods in .Net are the best thing since sliced bread. The number of developers singing their praises seems to rise by the minute but I'm afraid I can't help but despise them and unless someone can come up with a brilliant justification or example that I haven't already heard then I will never write one. I recently came across this thread and I must say reading the examples of the highest voted extensions made me feel a little like vomiting (metaphorically of course).

The main reasons given for their extensiony goodness are increased readability, improved OO-ness and the ability to chain method calls better.

I'm afraid I have to differ; I find in fact that they unequivocally reduce readability and OO-ness by virtue of the fact that they are, at their core, a lie. If you need a utility method that acts upon an object, then write a utility method that acts on that object; don't lie to me. When I see aString.SortMeBackwardsUsingKlingonSortOrder, then string should have that method, because that call is telling me something about the string object, not something about the AnnoyingNerdReferences.StringUtilities class.

LINQ was designed in such a way that chained method calls are necessary to avoid strange and uncomfortable expressions, so the extension methods that arise from LINQ are understandable, but in general chained method calls reduce readability and lead to code of the sort we see in obfuscated Perl contests.

So, in short, extension methods are evil. Cast off the chains of Satan and commit yourself to extension free code.


Bad Programmers are Language-Agnostic

A really bad programmer can write bad code in almost any language.


Sometimes it's appropriate to swallow an exception.

For UI bells and whistles, prompting the user with an error message is disruptive, and there is usually nothing for them to do anyway. In this case, I just log it, and deal with it when it shows up in the logs.


"Java Sucks" - yeah, I know that opinion is definitely not held by all :)

I have that opinion because the majority of Java applications I've seen are memory hogs, run slowly, have horrible user interfaces, and so on.

G-Man


Architects that do not code are useless.

That sounds a little harsh, but it's not unreasonable. If you are the "architect" for a system, but do not have some amount of hands-on involvement with the technologies employed then how do you get the respect of the development team? How do you influence direction?

Architects need to do a lot more (meet with stakeholders, negotiate with other teams, evaluate vendors, write documentation, give presentations, etc.). But if you never see code checked in by your architect... be wary!


Non-development staff should not be allowed to manage development staff.

Correction: Staff with zero development experience should not be allowed to manage development staff.


Never implement anything as a singleton.

You can decide not to construct more than one instance, but always ensure your implementation can handle more.

I have yet to find any scenario where using a singleton is actually the right thing to do.

I got into some very heated discussions over this in the last few years, but in the end I was always right.
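
A minimal Java sketch of that advice (the class is invented for illustration): provide a convenient shared instance, but keep the constructor usable so more instances remain possible.

class Clock {
    private static final Clock DEFAULT = new Clock();
    static Clock getDefault() { return DEFAULT; }   // convenience, not a cage
    Clock() { }                                     // still constructible: tests can make their own
    long now() { return System.currentTimeMillis(); }
}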


The class library guidelines for implementing IDisposable are wrong.

I don't share this too often, but I believe that the guidance for the default implementation for IDisposable is completely wrong.

My issue isn't with the overload of Dispose and then removing the item from finalization, but rather, I despise how there is a call to release the managed resources in the finalizer. I personally believe that an exception should be thrown (and yes, with all the nastiness that comes from throwing it on the finalizer thread).

The reasoning behind it is that if you are a client or server of IDisposable, there is an understanding that you can't simply leave the object lying around to be finalized. If you do, this is a design/implementation flaw (depending on how it is left lying around and/or how it is exposed), as you are not aware of the lifetime of instances that you should be aware of.

I think that this type of bug/error is on the level of race conditions/synchronization to resources. Unfortunately, with calling the overload of Dispose, that error is never materialized.

Edit: I've written a blog post on the subject if anyone is interested:

http://www.caspershouse.com/post/A-Better-Implementation-Pattern-for-IDisposable.aspx


That (at least during initial design), every database table (well, almost every one) should be clearly defined to contain some clearly understandable business entity or system-level domain abstraction, and that, whether or not you use it as a primary key and as foreign keys in other dependent tables, some column (attribute) or subset of the table attributes should be clearly defined to represent a unique key for that table (entity/abstraction). This is the only way to ensure that the overall table structure represents a logically consistent representation of the complete system data structure, without overlap or misunderstood flattening. I am a firm believer in using non-meaningful surrogate keys for PKs and FKs and join functionality (for performance, ease of use, and other reasons), but I believe the tendency in this direction has taken the database community too far away from the original Codd principles, and we have lost much of the benefit (of database consistency) that natural keys provided.

So why not use both?


Opinion: Data driven design puts the cart before the horse. It should be eliminated from our thinking forthwith.

The vast majority of software isn't about the data, it's about the business problem we're trying to solve for our customers. It's about a problem domain, which involves objects, rules, flows, cases, and relationships.

When we start our design with the data, and model the rest of the system after the data and the relationships between the data (tables, foreign keys, and x-to-x relationships), we constrain the entire application to how the data is stored in and retrieved from the database. Further, we expose the database architecture to the software.

The database schema is an implementation detail. We should be free to change it without having to significantly alter the design of our software at all. The business layer should never have to know how the tables are set up, or if it's pulling from a view or a table, or getting the table from dynamic SQL or a stored procedure. And that type of code should never appear in the presentation layer.

Software is about solving business problems. We deal with users, cars, accounts, balances, averages, summaries, transfers, animals, messages, packages, carts, orders, and all sorts of other real tangible objects, and the actions we can perform on them. We need to save, load, update, find, and delete those items as needed. Sometimes, we have to do those things in special ways.

But there's no real compelling reason that we should take the work that should be done in the database and move it away from the data and put it in the source code, potentially on a separate machine (introducing network traffic and degrading performance). Doing so means turning our backs on the decades of work that has already been done to improve the performance of stored procedures and functions built into databases. The argument that stored procedures introduce "yet another API" to be managed is specious: of course it does; that API is a facade that shields you from the database schema, including the intricate details of primary and foreign keys, transactions, cursors, and so on, and it prevents you from having to splice SQL together in your source code.

Put the horse back in front of the cart. Think about the problem domain, and design the solution around it. Then, derive the data from the problem domain.
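
A hedged Java sketch of the "stored procedure as facade" point (the procedure name and parameters are invented; conn, fromAccountId, toAccountId and amount stand in for values the caller already has): the business code calls one named operation instead of splicing SQL together.

try (var call = conn.prepareCall("{call transfer_funds(?, ?, ?)}")) {
    call.setLong(1, fromAccountId);
    call.setLong(2, toAccountId);
    call.setBigDecimal(3, amount);
    call.execute();   // the schema, keys, and transaction details stay behind the procedure
}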


This one is mostly web related but...

Use Tables for your web page layouts

If I were developing a gigantic site that needed to squeeze out performance, I might think about it, but nothing gives me an easier way to get a consistent look in the browser than tables. The majority of applications that I develop are for around 100-1000 users and possibly 100 at a time max. The extra bloat of the tables isn't killing my server by any means.


There's an awful lot of bad teaching out there.

We developers like to feel smugly superior when Joel says there's a part of the brain for understanding pointers that some people are just born without. The topics many of us discuss here and are passionate about are esoteric, but sometimes that's only because we make them so.


coding is not typing

It takes time to write the code. Most of the time in the editor window, you are just looking at the code, not actually typing. Not as often, but quite frequently, you are deleting what you have written. Or moving from one place to another. Or renaming.

If you are banging away at the keyboard for a long time you are doing something wrong.

Corollary: the number of lines of code written per day is not a linear measurement of a programmer's productivity. A programmer who writes 100 lines in a day is quite likely a better programmer than one who writes 20, but one who writes 5000 is certainly a bad programmer.


I firmly believe that unmanaged code isn't worth the trouble. The extra maintainability expenses associated with hunting down memory leaks which even the best programmers introduce occasionally far outweigh the performance to be gained from a language like C++. If Java, C#, etc. can't get the performance you need, buy more machines.


Most of programming job interview questions are pointless. Especially those figured out by programmers.

It is a common case, at least in my and my friends' experience, where a puffed-up programmer asks you some tricky WTF he spent weeks googling for. The funny thing about it is, you get home and google it within a minute. It's like they often try to beat you up with their sophisticated weapons, instead of checking whether you'd be a comprehensive, pragmatic team player to work with.

Similar stupidity IMO is when you're being asked for highly accessible fundamentals, like: "Oh wait, let me see if you can pseudo-code that insert_name_here-algorithm on a sheet of paper (sic!)". Do I really need to remember it while applying for a high-level programming job? Should I efficiently solve problems or puzzles?


Small code is always better, but complex ?: expressions instead of if-else made me realize that sometimes larger code is more readable.
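
A tiny Java illustration (the variable names are made up): the nested conditional expression is shorter, but the if-else chain is arguably easier to read.

int score = 85;

String grade = score > 90 ? "A" : score > 80 ? "B" : score > 70 ? "C" : "F";

String grade2;
if (score > 90)      grade2 = "A";
else if (score > 80) grade2 = "B";
else if (score > 70) grade2 = "C";
else                 grade2 = "F";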


I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.

For one, I believe that the first programming language should be one that highlights the need to learn control flow and variables, not objects and syntax.

For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.

Also the natural progression should be from "how can I do this" to "how can I find the library which does that" and not the other way round.


Don't write code, remove code!

As a smart teacher once told me: "Don't write code. Writing code is bad, removing code is good. And if you have to write code, write small code..."


Unit Testing won't help you write good code

The only reason to have Unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.

And furthermore, good developers will keep coupling low, which will make the addition of new code unlikely to cause problems with existing stuff.

In fact, I'll generalize that even further,

Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.

They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.


Haven't tested it yet for controversy, but there may be potential:

The best line of code is the one you never wrote.


If it's not native, it's not really programming

By definition, a program is an entity that is run by the computer. It talks directly to the CPU and the OS. Code that does not talk directly to the CPU and the OS, but is instead run by some other program that does talk directly to the CPU and the OS, is not a program; it's a script.

This was just simple common sense, completely non-controversial, back before Java came out. Suddenly there was a scripting language with a large enough feature set to accomplish tasks that had previously been exclusively the domain of programs. In response, Microsoft developed the .NET framework and some scripting languages to run on it, and managed to muddy the waters further by slowly reducing support for true programming among their development tools in favor of .NET scripting.

Even though it can accomplish a lot of things that you previously had to write programs for, managed code of any variety is still scripting, not programming, and "programs" written in it do and always will share the performance characteristics of scripts: they run more slowly and use up far more RAM than a real (native) program would take to accomplish the same task.

People calling it programming are doing everyone a disservice by dumbing down the definition. It leads to lower quality across the board. If you try and make programming so easy that any idiot can do it, what you end up with are a whole lot of idiots who think they can program.


The best code is often the code you don't write. As programmers we want to solve every problem by writing some cool method. Anytime we can solve a problem and still give the users 80% of what they want without introducing more code to maintain and test we have provided waaaay more value.


90 percent of programmers are pretty damn bad programmers, and virtually all of us have absolutely no tools to evaluate our current ability level (although we can generally look back and realize how bad we USED to suck)

I wasn't going to post this because it pisses everyone off and I'm not really trying for a negative score or anything, but:

A) isn't that the point of the question, and

B) Most of the "Answers" in this thread prove this point

I heard a great analogy the other day: Programming abilities vary AT LEAST as much as sports abilities. How many of us could jump into a professional team and actually improve their chances?


VB 6 could be used for good as well as evil. It was a Rapid Application Development environment in a time of over complicated coding.

I have hated VB vehemently in the past, and still mock VB.NET (probably in jest) as a Fisher Price language due to my dislike of classical VB, but in its day, nothing could beat it for getting the job done.


Inheritance is evil and should be deprecated.

The truth is that aggregation is better in all cases. Statically typed OOP languages can't avoid inheritance; it's the only way to describe what a method wants from a type. But dynamic languages and duck typing can live without it. Ruby mixins are much more powerful than inheritance and a lot more controllable.


Use type inference anywhere and everywhere possible.

Edit:

Here is a link to a blog entry I wrote several months ago about why I feel this way.

http://blogs.msdn.com/jaredpar/archive/2008/09/09/when-to-use-type-inference.aspx
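
A small illustration of the general advice using Java's local-variable type inference (the variable names are made up; the linked post is about C#, but the idea carries over):

var users = new java.util.ArrayList<String>();                      // inferred as ArrayList<String>
var totalLength = users.stream().mapToInt(String::length).sum();   // inferred as int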


Opinion: Duration in the development field does not always mean the same as experience.

Many trades look at "years of experience" in a language. Yes, 5 years of C# can make sense, since you may learn new tricks and whatnot. However, if you are with the same company maintaining the same code base for a number of years, I feel you are not getting the same exposure to different situations as a person who works on varied projects and client needs.

I once interviewed a person who prided himself on having 10 years of programming experience and having worked with VB5, 6, and VB.Net... all in the same company during that time. After more probing, I found out that while he had worked with all of those versions of VB, he was only upgrading and constantly maintaining his original VB5 app. He never modified the architecture and let the upgrade wizards do their thing. I have interviewed people who have only 2 years in the field but have worked on multiple projects and have more "experience" than he did.


I don't care how powerful a programming language is if its syntax is not intuitive and I can't set it aside for some period of time and come back to it without too much effort at refreshing on the details. I would rather a language itself be intuitive than it be cryptic but powerful for creating DSL's. A computer language is a user interface for ME, and I want it designed for intuitive ease of use like any other user interface.


Believe it or not, my belief that, in an OO language, most of the (business logic) code that operates on a class's data should be in the class itself is heresy on my team.


I think that using regions in C# is totally acceptable to collapse your code while in VS. Too many people say they hide your code and make it hard to find things. But if you use them properly, they can be very helpful for identifying sections of code.


Stay away from Celko!!!!

http://www.dbdebunk.com/page/page/857309.htm

I think it makes a lot more sense to use surrogate primary keys than "natural" primary keys.


@ocdecio: Fabian Pascal gives (in chapter 3 of his book Practical Issues in Database Management, cited in point 3 at the page that you link) stability (it always exists and doesn't change) as one of the criteria for choosing a key. When a natural key does not possess such a property, then a surrogate key must be used, for evident reasons, which you hint at in the comments.

You don't know what he wrote and you have not bothered to check; otherwise you could discover that you actually agree with him. Nothing controversial there: he was saying "don't be dogmatic, adapt general guidelines to circumstances, and, above all, think; use your brain instead of a dogmatic/cookbook/words-of-guru approach".


Opinion: most code out there is crappy, because that's what the programmers WANT it to be.

Indirectly, we have been nurturing a culture of extreme creativeness. It's not that I don't think problem solving has creative elements -- it does -- it's just that it's not even remotely the same as something like painting (see Paul Graham's famous "Hackers and Painters" essay).

If we bend our industry towards that approach, it ultimately means letting every programmer go forth and whack out whatever highly creative, crazy stuff they want. Of course, for any sizable project, trying to put together dozens of unrelated, unstructured, unplanned bits into one final coherent bit won't work by definition. That's not a guess or an estimate; it's the state of the industry that we face today. How many times have you seen sub-bits of functionality in a major program that were completely inconsistent with the rest of the code? It's so common now, it's a wonder anyone can use any of these messes.

Convoluted, complicated, ugly stuff that just keeps getting worse and more unstable. If we were building something physical, everyone on the planet would call us out on how horribly ugly and screwed up the stuff is, but because it is more or less hidden by being virtual, we are able to get away with some of the worst manufacturing processes our species will ever see. (Can you imagine a car where four different people designed the four different wheels, in four different ways?)

But the sad part, the controversial part of it all, is that there is absolutely NO reason for it to be this way, other than that historically the culture was towards more freedom and less organization, so we stayed that way (and probably got a lot worse). Software development is a joke, but it's a joke because that's what the programmers want it to be (though they would never in a million years admit that it's true; a "plot by management" is a better explanation for most people).

How long will we keep shooting ourselves in the foot before we wake up and realize that we are the ones holding the gun, pointing it and also pulling the trigger?

Paul.


You need to watch out for Object-Obsessed Programmers.

e.g. if you write a class that models built-in types such as ints or floats, you may be an object-obsessed programmer.


I'm always right.

Or call it design by discussion. But if I propose something, you had better be able to demonstrate why I'm wrong, and propose an alternative that you can defend.

Of course, this only works if I'm reasonable. Luckily for you, I am. :)


JavaScript is a "messy" language but god help me I love it.


Only write an abstraction if it's going to save 3X as much time later.

I see people write all these crazy abstractions sometimes and I think to myself, "Why?"

Unless an abstraction is really going to save you time later or it's going to save the person maintaining your code time, it seems people are just writing spaghetti code more and more.


People complain about removing 'goto' from the language. I happen to think that any sort of conditional jump is highly overrated, and that 'if', 'while', 'switch' and the general-purpose 'for' loop should be used with extreme caution.

Every time you make a comparison and conditional jump, a tiny bit of complexity is added, and this complexity adds up quickly once the call stack gets a couple hundred items deep.

My first choice is to avoid the conditional, but if it isn't practical my next preference is to keep the conditional complexity in constructors or factory methods.
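
A hedged Java sketch of "keep the conditional in the factory" (the interface and format names are invented): the branch runs once when the object is chosen, and call sites stay branch-free.

interface Compressor { byte[] compress(byte[] data); }

static Compressor forFormat(String format) {
    switch (format) {                                    // the only conditional
        case "none":  return data -> data;
        case "upper": return data -> new String(data).toUpperCase().getBytes();   // toy stand-in
        default: throw new IllegalArgumentException("unknown format: " + format);
    }
}
// Afterwards, callers never branch on the format again: compressor.compress(payload);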

Clearly this isn't practical for many projects and algorithms (like control flow loops), but it is something I enjoy pushing on.

-Rick


We're software developers, not C/C#/C++/PHP/Perl/Python/Java/... developers.

After you've been exposed to a few languages, picking up a new one and being productive with it is a small task. That is to say that you shouldn't be afraid of new languages. Of course, there is a large difference between being productive and mastering a language. But, that's no reason to shy away from a language you've never seen. It bugs me when people say, "I'm a PHP developer." or when a job offer says, "Java developer". After a few years experience of being a developer, new languages and APIs really shouldn't be intimidating and going from never seeing a language to being productive with it shouldn't take very long at all. I know this is controversial but it's my opinion.


Never let best practices or pattern obsession enslave you.

These should be guidelines, not laws set in stone.

And I really like the patterns, and the GoF book more or less says it that way too, stuff to browse through, providing a common jargon. Not a ball and chain gospel.


We do a lot of development here using a Model-View-Controller framework we built. I'm often telling my developers that we need to violate the rules of the MVC design pattern to make the site run faster. This is a hard sell for developers, who are usually unwilling to sacrifice well-designed code for anything. But performance is our top priority in building web applications, so sometimes we have to make concessions in the framework.

For example, the view layer should never talk directly to the database, right? But if you are generating large reports, the app will use a lot of memory to pass that data up through the model and controller layers. If you have a database that supports cursors, it can make the app a lot faster to hit the database directly from the view layer.
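
A hedged sketch of the large-report case (JDBC, with a made-up query; dataSource and writeRow stand in for whatever the framework provides): stream rows with a cursor instead of materializing the whole report in the model layer.

try (var conn = dataSource.getConnection();
     var stmt = conn.prepareStatement("SELECT id, total FROM orders")) {
    stmt.setFetchSize(500);                              // hint the driver to use a cursor
    try (var rs = stmt.executeQuery()) {
        while (rs.next()) {
            writeRow(rs.getLong("id"), rs.getBigDecimal("total"));   // straight to the response
        }
    }
}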

Performance trumps development standards, that's my controversial view.


...That the "clarification of ideas" should not be the sole responsibility of the developer...and yes xkcd made me use that specific phrase...

Too often we are handed projects that are specified in pseudo-meta-sorta-kinda-specific "code", if you want to call it that. There are often product managers who draw up the initial requirements for a project and perform next to 0% of basic logic validation.

I'm not saying that the technical approach shouldn't be drawn up by the architect, or that the specific implementation shouldn't be the responsibility of the developer, but rather that it should be the responsibility of the product manager to ensure that their requirements are logically feasible.

Personally I've been involved in too many "simple" projects that encounter a little scope creep here and there and then come across a "small" change or feature addition which contradicts previous requirements--whether implicitly or explicitly. In these cases it is all too easy for the person requesting the borderline-impossible change to become enraged that developers can't make their dream a reality.


Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)


The more process you put around programming, the worse the code becomes

I have noticed something in my 8 or so years of programming, and it seems ridiculous. It's that the only way to get quality is to employ quality developers, and remove as much process and formality from them as you can. Unit testing, coding standards, code/peer reviews, etc only reduce quality, not increase it. It sounds crazy, because the opposite should be true (more unit testing should lead to better code, great coding standards should lead to more readable code, code reviews should improve the quality of code) but it's not.

I think it boils down to the fact we call it "Software Engineering" when really it's design and not engineering at all.


Some numbers to substantiate this statement:

From the Editor

IEEE Software, November/December 2001

Quantifying Soft Factors

by Steve McConnell

...

Limited Importance of Process Maturity

... In comparing medium-size projects (100,000 lines of code), the one with the worst process will require 1.43 times as much effort as the one with the best process, all other things being equal. In other words, the maximum influence of process maturity on a project’s productivity is 1.43. ...

... What Clark doesn’t emphasize is that for a program of 100,000 lines of code, several human-oriented factors influence productivity more than process does. ...

... The seniority-oriented factors alone (AEXP, LTEX, PEXP) exert an influence of 3.02. The seven personnel-oriented factors collectively (ACAP, AEXP, LTEX, PCAP, PCON, PEXP, and SITE §) exert a staggering influence range of 25.8! This simple fact accounts for much of the reason that non-process-oriented organizations such as Microsoft, Amazon.com, and other entrepreneurial powerhouses can experience industry-leading productivity while seemingly shortchanging process. ...

The Bottom Line

... It turns out that trading process sophistication for staff continuity, business domain experience, private offices, and other human-oriented factors is a sound economic tradeoff. Of course, the best organizations achieve high motivation and process sophistication at the same time, and that is the key challenge for any leading software organization.

§ Read the article for an explanation of these acronyms.


The simplest approach is the best approach

Programmers like to solve assumed or inferred requirements that add levels of complexity to a solution.

"I assume this block of code is going to be a performance bottleneck, therefore I will add all this extra code to mitigate this problem."

"I assume the user is going to want to do X, therefore I will add this really cool additional feature."

"If I make my code solve for this unneeded scenario it will be a good opportunity to use this new technology I've been interested in trying out."

In reality, the simplest solution that meets the requirements is best. This also gives you the most flexibility in taking your solution in a new direction if and when new requirements or problems come up.


One class per file

Who cares? I much prefer entire programs contained in one file rather than a million different files.


Lower level languages are inappropriate for most problems.


It's ok to write garbage code once in a while

Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a console or web app, write some inline SQL (feels good), and blast out the requirement.


The use of Hungarian notation should be punished with death.

That should be controversial enough ;)


Enable multiple checkout. If we improve the discipline of the developers enough, we will get much more efficiency from this setting through source control's automatic merging.


Separation of concerns is evil :)

Only separate concerns if you have good reason for it. Otherwise, don't separate them.

I have encountered too many occasions of separation only for the sake of separation. The second half of Dijkstra's statement "Minimal coupling, maximal cohesion" should not be forgotten. :)

Happy to discuss this further.


If you can only think of one way to do it, don't do it.

Whether it's an interface layout, a task flow, or a block of code, just stop. Do something to collect more ideas, like asking other people how they would do it, and don't go back to implementing until you have at least three completely different ideas and at least one crisis of confidence.

Generally, when I think something can only be done one way, or think only one method has any merit, it's because I haven't thought through the factors which ought to be influencing the design thoroughly enough. If I had, some of them would clearly be in conflict, leading to a mess and thus an actual decision rather than a rote default.

Being a solid programmer does not make you a solid interface designer

And following all of the interface guidelines in the world will only begin to help. If it's even humanly possible... There seems to be a peculiar addiction to making things 'cute' and 'clever'.


My controversial opinion: Object Oriented Programming is absolutely the worst thing that's ever happened to the field of software engineering.

The primary problem with OOP is the total lack of a rigorous definition that everyone can agree on. This easily leads to implementations that have logical holes in them, or languages like Java that adhere to this bizarre religious dogma about what OOP means, while forcing the programmer into doing all these contortions and "design patterns" just to work around the limitations of a particular OOP system.

So, OOP tricks the programmer into thinking they're making these huge productivity gains, that OOP is somehow a "natural" way to think, while forcing the programmer to type boatloads of unnecessary boilerplate.

Then since nobody knows what OOP actually means, we get vast amounts of time wasted on petty arguments about whether language X or Y is "truly OOP" or not, what bizarre cargo cultish language features are absolutely "essential" for a language to be considered "truly OOP".

Instead of demanding that this language or that language be "truly OOP", we should be looking at which language features are shown by experiment to actually increase productivity, instead of trying to force a language into being some imagined ideal, or indeed forcing our programs to conform to some platonic ideal of a "truly object oriented program".

Instead of insisting that our programs conform to some platonic ideal of "Truly object oriented", how about we focus on adhering to good engineering principles, making our code easy to read and understand, and using the features of a language that are productive and helpful, regardless of whether they are "OOP" enough or not.


A Clever Programmer Is Dangerous

I have spent more time trying to fix code written by "clever" programmers. I'd rather have a good programmer than an exceptionally smart programmer who wants to prove how clever he is by writing code that only he (or she) can interpret.


Agile sucks.


Opinion: Not having function type definitions and return types can lead to flexible and readable code.

This opinion probably applies more to interpreted languages than compiled. Requiring a return type, and a function argument list, are great for things like intellisense to auto document your code, but they are also restrictions.

Now don't get me wrong, I am not saying throw away return types, or argument lists. They have their place. And 90% of the time they are more of a benefit than a hindrance.

There are times and places when this is useful.


Automatic Updates Lead to Poorer Quality Software that is Less Secure

The Idea

A system to keep users' software up to date with the latest bug fixes and security patches.

The Reality

Products have to be shipped by fixed deadlines, often at the expense of QA. Software is then released with many bugs and security holes in order to meet the deadline in the knowledge that the 'Automatic Update' can be used to fix all the problems later.

Now, the piece of software that really made me think of this is VS2K5. At first, it was great, but as the updates were installed the software is slowly getting worse. The biggest offence was the loss of macros - I had spent a long time creating a set of useful VBA macros to automate some of the code I write - but apparently there was a security hole and instead of fixing it the macro system was disabled. Bang goes a really useful feature: recording keystrokes and repeated replaying of them.

Now, if I were really paranoid, I could see Automatic Updates as a way to get people to upgrade their software by slowly installing code that breaks the system more often. As the system becomes more unreliable, users are tempted to pay out for the next version with the promise of better reliability and so on.

Skizz


Member variables should never be declared private (in java)

If you declare something private, you prevent any future developer from deriving from your class and extending the functionality. Essentially, by writing "private" you are implying that you know more now about how your class can be used than any future developer might ever know. Whenever you write "private", you ought to write "protected" instead.

Classes should never be declared final (in java)

Similarly, if you declare a class as final (which prevents it from being extended -- prevents it from being used as a base class for inheritance), you are implying that you know more than any future programmer might know about what is the right and proper way to use your class. This is never a good idea. You don't know everything. Someone might come up with a perfectly suitable way to extend your class that you didn't think of.

Java Beans are a terrible idea.

The java bean convention -- declaring all members as private and then writing get() and set() methods for every member -- forces programmers to write boilerplate, error-prone, tedious, and lengthy code where no code is needed. Just make your member variables public! Trust in your ability to change it later, if you need to change the implementation (hint: 99% of the time, you never will).
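
A minimal Java sketch of the "protected, not private" point (the classes are invented for illustration):

class Report {
    protected java.util.List<String> lines = new java.util.ArrayList<>();   // reachable from subclasses
    public void add(String line) { lines.add(line); }
    public void render() { lines.forEach(System.out::println); }
}

class NumberedReport extends Report {
    @Override public void render() {
        for (int i = 0; i < lines.size(); i++) {        // possible only because lines isn't private
            System.out.println((i + 1) + ": " + lines.get(i));
        }
    }
}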


Never make up your mind on an issue before thoroughly considering said issue. No programming standard EVER justifies approaching an issue in a poor manner. If the standard demands a class to be written, but after careful thought, you deem a static method to be more appropriate, always go with the static method. Your own discretion is always better than even the best forward thinking of whoever wrote the standard. Standards are great if you're working in a team, but rules are meant to be broken (in good taste, of course).


in almost all cases, comments are evil: http://gooddeveloper.wordpress.com/


Realizing that sometimes good enough is good enough is a major jump in your value as a programmer.

Note that when I say 'good enough', I mean 'good enough', not that it's some crap that happens to work. But then again, when you are under a time crunch, 'some crap that happens to work' may be considered 'good enough'.


Detailed designs are a waste of time, and if an engineer needs them in order to do a decent job, then it's not worth employing them!

OK, so a couple of ideas are thrown together here:

1) the old idea of waterfall development, where you supposedly did all your design up front, resulting in some glorified extremely detailed class diagrams, sequence diagrams etc. etc., was a complete waste of time. As I once said to a colleague, I'll be done with design once the code is finished. Which I think is what agile is partly a recognition of - that the code is the design, and that any decent developer is continually refactoring. This, of course, makes worrying that your class diagrams are out of date laughable - they always will be.

2) management often thinks that you can usefully take a poor engineer and use them as a 'code monkey' - in other words they're not particularly talented, but heck - can't you use them to write some code. Well.. no! If you have to spend so much time writing detailed specs that you're basically specifying the code, then it will be quicker to write it yourself. You're not saving any time. If a developer isn't smart enough to use their own imagination and judgement they're not worth employing. (Note, I'm not talking about junior engineers who are able to learn. Plenty of 'senior engineers' fall into this category.)


Development teams should be segregated more often by technological/architectural layers instead of business function.

I come from a general culture where developers own "everything from web page to stored procedure". So in order to implement a feature in the system/application, they would prepare the database table schemas, write the stored procs, match the data access code, implement the business logic and web service methods, and the web page interfaces.

And guess what? Everybody has their own way of doing things! Everyone struggles to learn the ASP.NET AJAX and Telerik or Infragistic suites, Enterprise Library or other productivity, data-layer and persistence frameworks, aspect-oriented frameworks, logging and caching application blocks, DB2 or Oracle peculiarities. And guess what? Everybody takes a heck of a long time to learn how to do things the proper way! Meaning lots of mistakes in the meantime, and plenty of resulting defects and performance bottlenecks! And a heck of a lot longer to fix them! Across each and every layer! Everybody has a hand in every Visual Studio project. Nobody is specialised enough to handle and optimise one problem/technology domain. Too many chefs spoil the soup. All these chefs result in some radioactive goo.

Developers may have cross-layer/domain responsibilities, but they should not pretend that they can be masters of all disciplines, and should be limited to only a few. In my experience, when a project is not a small one and utilises lots of technologies, covering more business functions in a single layer is more productive (as well as encouraging more test code for that layer) than covering fewer business functions spanning the entire architectural stack (which motivates developers to test only via their UI and not via test code).


UML diagrams are highly overrated

Of course there are useful diagrams e.g. class diagram for the Composite Pattern, but many UML diagrams have absolutely no value.


Don't comment your code

Comments are not code and therefore when things change it's very easy to not change the comment that explained the code. Instead I prefer to refactor the crap out of code to a point that there is no reason for a comment. An example:

if(data == null)  // First time on the page

to:

bool firstTimeOnPage = data == null;
if(firstTimeOnPage)

The only time I really comment is when it's a TODO or explaining why

Widget.GetData(); // only way to grab data, TODO: extract interface or wrapper

Python does everything that other programming languages do in half the dev time... and so does Google!!! Check out Unladen Swallow if you disagree.

Wait, this is a fact. Does it still qualify as an answer to this question?


I think Java should have supported system-specific features via thin native library wrappers.

Phrased another way, I think Sun's determination to require that Java only support portable features was a big mistake from almost everyone's perspective.

A zillion years later, SWT came along and solved the basic problem of writing a portable native-widget UI, but by then Microsoft had been forced to fork Java into C# and lots of C++ had been written that could otherwise have been done in civilized Java. Now the world runs on a blend of C#, VB, Java, C++, Ruby, Python and Perl. All the Java programs still look and act weird except for the SWT ones.

If Java had come out with thin wrappers around native libraries, people could have written the SWT-equivalent entirely in Java, and we could have, as things evolved, made portable apparently-native apps in Java. I'm totally for portable applications, but it would have been better if that portability were achieved in an open market of middleware UI (and other feature) libraries, and not through simply reducing the user's menu to junk or faking the UI with Swing.

I suppose Sun thought that ISVs would suffer with Java's limitations and then all the world's new PC apps would magically run on Suns. Nice try. They ended up not getting the apps AND not having the language take off until we could use it for logic-only server back-end code.

If things had been done differently maybe the local application wouldn't be, well, dead.


Manually halting a program is an effective, proven way to find performance problems.

Believable? Not to most. True? Absolutely.

Programmers are far more judgmental than necessary.

Witness all the things considered "evil" or "horrible" in these posts.

Programmers are data-structure-happy.

Witness all the discussions of classes, inheritance, private-vs-public, memory management, etc., versus how to analyze requirements.


Not very controversial AFAIK but... AJAX was around way before the term was coined and everyone needs to 'let it go'. People were using it for all sorts of things. No one really cared about it though.

Then suddenly POW! Someone coined the term and everyone jumped on the AJAX bandwagon. Suddenly people are now experts in AJAX, as if "experts" in dynamically loading data weren't around before. I think it's one of the biggest contributing factors leading to the brutal destruction of the internet. That and "Web 2.0".


I don't believe that any question related to optimization should be flooded with chants of the misquoted "premature optimization is the root of all evil", because code that is optimized into obfuscation is what makes coding fun.


If a developer cannot write clear, concise and grammatically correct comments then they should have to go back and take English 101.

We have developers and (the horror) architects who cannot write coherently. When their documents are reviewed they say things like "oh, don't worry about grammatical errors or spelling - that's not important". Then they wonder why their convoluted garbage documents become convoluted buggy code.

I tell the interns that I mentor that if you can't communicate your great ideas verbally or in writing you may as well not have them.


Logger configs are a waste of time. Why have them if it means learning a new syntax, especially one that fails silently? Don't get me wrong, I love good logging. I love logger inheritance and adding formatters to handlers to loggers. But why do it in a config file?

Do you want to make changes to logging code without recompiling? Why? If you put your logging code in a separate class, file, whatever, what difference will it make?

Do you want to distribute a configurable log with your product to clients? Doesn't this just give too much information anyway?

The most frustrating thing about it is that popular utilities written in a popular language tend to expose good APIs in the format that language specifies. Write a Java logging utility and I know you've generated the javadocs, which I know how to navigate. Write a domain-specific language for your logger config and what do we have? Maybe there's documentation, but where the heck is it? You decided on a way to organize it, and I'm just not interested in following your line of thought.
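
As a minimal sketch of the alternative being argued for, configuring a logger in plain code rather than in a config-file DSL, here using java.util.logging and an invented logger name:

    import java.util.logging.ConsoleHandler;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;

    final class LogSetup {
        // Configuration lives in ordinary, compiled, refactorable Java instead
        // of a custom syntax that fails silently.
        static Logger configure(String name) {
            Logger logger = Logger.getLogger(name);
            ConsoleHandler handler = new ConsoleHandler();
            handler.setLevel(Level.FINE);
            handler.setFormatter(new SimpleFormatter());
            logger.addHandler(handler);
            logger.setLevel(Level.FINE);
            logger.setUseParentHandlers(false); // don't also log via the root handler
            return logger;
        }
    }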


Many developers have an underdeveloped sense of where to put things, resulting in messy source code organization at the file, class, and method level. Further, a sizable percentage of such developers are essentially tone-deaf to issues of code organization. Attempts to teach, cajole, threaten, or shame them into keeping their code clean are futile.

On any sufficiently successful project, there's usually a developer who does have a good sense of organization very quietly wielding a broom to the code base to keep entropy at bay.


Not all programmers are created equal

Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.

It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)


Microsoft Windows is the best platform for software development.

Reasoning: Microsoft spoils its developers with excellent and cheap development tools, the platform and its APIs are well documented, the platform is evolving at a rapid rate which creates a lot of opportunities for developers, the OS has a large user base which is important for obvious commercial reasons, there is a big community of Windows developers, and I haven't yet been fired for choosing Microsoft.


Hardcoding is good!

Really, it is more efficient and much easier to maintain in many cases!

The number of times I've seen constants put into parameter files... really, how often will you change the freezing point of water or the speed of light?

For C programs just hard code these types of values into a header file; for Java, into a static class, etc.

When these parameters have a drastic effect on your program's behaviour you really want to do a regression test on every change; this seems more natural with hard-coded values. When things are stored in parameter/property files the temptation is to think "this is not a program change so I don't need to test it".

The other advantage is it stops people messing with vital values in the parameter/property files because there aren't any!
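
A sketch of the static-class approach mentioned above for Java (the class name is invented; the constants are well-known physical values):

    // Values that will never change belong in code, not in a parameter file
    // someone can quietly edit.
    final class PhysicalConstants {
        private PhysicalConstants() {}                    // no instances
        static final double FREEZING_POINT_WATER_C = 0.0; // freezing point of water
        static final double SPEED_OF_LIGHT_M_S = 299792458.0;
    }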


Useful and clean high-level abstractions are significantly more important than performance

one example:

Too often I watch peers spending hours writing overcomplicated sprocs, or massive LINQ queries which return unintuitive anonymous types, for the sake of "performance".

They could achieve almost the same performance but with considerably cleaner, intuitive code.



There is no "one size fits all" approach to development

I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.

Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.

People even seem to go as far as putting badges on their blogs such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.

It isn't.

Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.


Unit Testing won't help you write good code

The only reason to have Unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.

And furthermore, good developers will keep coupling low, which will make the addition of new code unlikely to cause problems with existing stuff.

In fact, I'll generalize that even further,

Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.

They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.


"Everything should be made as simple as possible, but not simpler." - Einstein.


Nobody Cares About Your Code

If you don't work on a government security clearance project and you're not in finance, odds are nobody cares what you're working on outside of your company/customer base. No one's sniffing packets or trying to hack into your machine to read your source code. This doesn't mean we should be flippant about security, because there are certainly a number of people who just want to wreak general havoc and destroy your hard work, or access stored information your company may have such as credit card data or identity data in bulk. However, I think people are overly concerned about other people getting access to your source code and taking your ideas.


Opinion: There should not be any compiler warnings, only errors. Or, formulated differently: you should always compile your code with -Werror.

Reason: Either the compiler thinks it is something which should be corrected, in which case it should be an error, or it is not necessary to fix, in which case the compiler should just shut up.


Exceptions considered harmful.


"Don't call virtual methods from constructors". This is only sometimes a PITA, but is only so because in C# I cannot decide at which point in a constructor to call my base class's constructor. Why not? The .NET framework allows it, so what good reason is there for C# to not allow it?

Damn!


Preconditions for arguments to methods/functions should be part of the language rather than programmers checking it always.
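
As a rough Java illustration of the status quo being complained about, with invented names, the checks a programmer has to repeat by hand on every method today:

    import java.util.Objects;

    class Greeter {
        // Every method has to restate checks like these manually; the answer
        // argues the language should enforce declared preconditions for us.
        static String greet(String name, int times) {
            Objects.requireNonNull(name, "name must not be null");
            if (times < 1) {
                throw new IllegalArgumentException("times must be at least 1");
            }
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < times; i++) {
                sb.append("Hello, ").append(name).append("! ");
            }
            return sb.toString().trim();
        }
    }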


Upfront design - don't just start writing code because you're excited to write code

I've seen SO many apps that are poorly designed because the developer was so excited to get coding that they just opened up a white page and started writing code. I understand that things change during the development lifecycle. However, it's difficult working with applications that have several different layouts and development methodologies from form to form, method to method.

It's difficult to hit the target your application is to handle if you haven't clearly defined the task and how you plan to code it. Take some time (and not just 5 minutes) and make sure you've laid out as much of it as you can before you start coding. This way you'll avoid a spaghetti mess that your replacement will have to support.


According to the amount of feedback I've gotten, my most controversial opinion, apparently, is that programmers don't always read the books they claim to have read. This is followed closely by my opinion that a programmer with a formal education is better than the same programmer who is self-taught (but not necessarily better than a different programmer who is self-taught).


Never implement anything as a singleton.

You can decide not to construct more than one instance, but always ensure your implementation can handle more.

I have yet to find any scenario where using a singleton is actually the right thing to do.

I got into some very heated discussions over this in the last few years, but in the end I was always right.
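
One possible sketch of the idea, with invented names: write an ordinary class and let the application, not the class, decide to create exactly one instance:

    // An ordinary class: nothing here forbids a second instance.
    class Configuration {
        private final String environment;
        Configuration(String environment) { this.environment = environment; }
        String environment() { return environment; }
    }

    class App {
        public static void main(String[] args) {
            // The application chooses to build exactly one instance; tests or
            // multi-tenant setups remain free to build more.
            Configuration config = new Configuration("production");
            System.out.println(config.environment());
        }
    }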


C++ is a good language

I practically got lynched in another question a week or two back for saying that C++ wasn't a very nice language. So now I'll try saying the opposite. ;)

No, seriously, the point I tried to make then, and will try again now, is that C++ has plenty of flaws. It's hard to deny that. It's so extremely complicated that learning it well is practically something you can dedicate your entire life to. It makes many common tasks needlessly hard, allows the user to plunge head-first into a sea of undefined behavior and unportable code, with no warnings given by the compiler.

But it's not the useless, decrepit, obsolete, hated language that many people try to make it. It shouldn't be swept under the carpet and ignored. The world wouldn't be a better place without it. It has some unique strengths that, unfortunately, are hidden behind quirky syntax, legacy cruft and not least, bad C++ teachers. But they're there.

C++ has many features that I desperately miss when programming in C# or other "modern" languages. There's a lot in it that C# and other modern languages could learn from.

It's not blindly focused on OOP, but has instead explored and pioneered generic programming. It allows surprisingly expressive compile-time metaprogramming producing extremely efficient, robust and clean code. It took in lessons from functional programming almost a decade before C# got LINQ or lambda expressions.

It allows you to catch a surprising number of errors at compile-time through static assertions and other metaprogramming tricks, which eases debugging vastly, and even beats unit tests in some ways. (I'd much rather catch an error at compile-time than afterwards, when I'm running my tests).

Deterministic destruction of variables allows RAII, an extremely powerful little trick that makes try/finally blocks and C#'s using blocks redundant.

And while some people accuse it of being "design by committee", I'd say yes, it is, and that's actually not a bad thing in this case. Look at Java's class library. How many classes have been deprecated again? How many should not be used? How many duplicate each others' functionality? How many are badly designed?

C++'s standard library is much smaller, but on the whole, it's remarkably well designed, and except for one or two minor warts (vector<bool>, for example), its design still holds up very well. When a feature is added to C++ or its standard library, it is subjected to heavy scrutiny. Couldn't Java have benefited from the same? .NET too, although it's younger and was somewhat better designed to begin with, is still accumulating a good handful of classes that are out of sync with reality, or were badly designed to begin with.

C++ has plenty of strengths that no other language can match. It's a good language.


Less code is better than more!

If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.


Bad Programmers are Language-Agnostic

A really bad programmer can write bad code in almost any language.


You don't have to program everything

I'm getting tired of everything, and I mean everything, needing to be stuffed into a program, as if that is always faster. Everything needs to be web-based, everything needs to be done via a computer. Please, just use your pen and paper. It's faster and less maintenance.


The world needs more GOTOs

GOTOs are avoided religiously, often with no reasoning beyond "my professor told me GOTOs are bad." They have a purpose and would greatly simplify production code in many places.

That said, they aren't really necessary in 99% of the code you'll ever write.


Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)


The class library guidelines for implementing IDisposable are wrong.

I don't share this too often, but I believe that the guidance for the default implementation for IDisposable is completely wrong.

My issue isn't with the overload of Dispose and then removing the item from finalization, but rather, I despise how there is a call to release the managed resources in the finalizer. I personally believe that an exception should be thrown (and yes, with all the nastiness that comes from throwing it on the finalizer thread).

The reasoning behind it is that if you are a client or server of IDisposable, there is an understanding that you can't simply leave the object lying around to be finalized. If you do, this is a design/implementation flaw (depending on how it is left lying around and/or how it is exposed), as you are not aware of the lifetime of instances that you should be aware of.

I think that this type of bug/error is on the level of race conditions/synchronization to resources. Unfortunately, with calling the overload of Dispose, that error is never materialized.

Edit: I've written a blog post on the subject if anyone is interested:

http://www.caspershouse.com/post/A-Better-Implementation-Pattern-for-IDisposable.aspx


Once I saw the following from a co-worker:

equal = a.CompareTo(b) == 0;

I pointed out that he cannot assume that in the general case, but he just laughed.
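
For what it's worth, Java has the same trap; BigDecimal is a standard example where compareTo() returning 0 does not imply equals():

    import java.math.BigDecimal;

    class CompareToDemo {
        public static void main(String[] args) {
            BigDecimal a = new BigDecimal("1.0");
            BigDecimal b = new BigDecimal("1.00");
            // compareTo() compares numeric value only and returns 0 here...
            System.out.println(a.compareTo(b) == 0); // true
            // ...but equals() also compares scale, so the objects are not equal.
            System.out.println(a.equals(b));         // false
        }
    }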


Null references should be removed from OO languages

Coming from a Java and C# background, where it's normal to return null from a method to indicate failure, I've come to conclude that nulls cause a lot of avoidable problems. Language designers could remove a whole class of errors related to NullReferenceExceptions if they simply eliminated null references from code.

Additionally, when I call a method, I have no way of knowing whether that method can return null references unless I actually dig in the implementation. I'd like to see more languages follow F#'s model for handling nulls: F# doesn't allow programmers to return null references (at least for classes compiled in F#), instead it requires programmers to represent empty objects using option types. The nice thing about this design is how useful information, such as whether a function can return null references, is propagated through the type system: functions which return a type 'a have a different return type than functions which return 'a option.
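
Java 8's java.util.Optional (which arrived after this discussion) plays a roughly similar role to F#'s option type; a small sketch with invented names:

    import java.util.Optional;

    class UserLookup {
        // The return type itself says "may be absent"; callers can't forget to check.
        static Optional<String> findEmail(String userId) {
            if ("alice".equals(userId)) {
                return Optional.of("alice@example.com");
            }
            return Optional.empty();
        }

        public static void main(String[] args) {
            String email = findEmail("bob").orElse("no email on file");
            System.out.println(email);
        }
    }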


As there are hundreds of answers to this mine will probably end up unread, but here's my pet peeve anyway.

If you're a programmer then you're most likely awful at Web Design/Development

This website is a phenomenal resource for programmers, but an absolutely awful place to come if you're looking for XHTML/CSS help. Even the good Web Developers here are handing out links to resources that were good in the 90's!

Sure, XHTML and CSS are simple to learn. However, you're not just learning a language! You're learning how to use it well, and very few designers and developers can do that, let alone programmers. It took me ages to become a capable designer and even longer to become a good developer. I could code in HTML from the age of 10 but that didn't mean I was good. Now I am a capable designer in programs like Photoshop and Illustrator, I am perfectly able to write a good website in Notepad and am able to write basic scripts in several languages. Not only that but I have a good nose for Search Engine Optimisation techniques and can easily tell you where the majority of people are going wrong (hint: get some good content!).

Also, this place is a terrible resource for advice on web standards. You should NOT just write code to work in the different browsers. You should ALWAYS follow the standard to future-proof your code. More often than not the fixes you use on your websites will break when the next browser update comes along. Not only that but the good browsers follow standards anyway. Finally, the reason IE was allowed to ruin the Internet was because YOU allowed it by coding your websites for IE! If you're going to continue to do that for Firefox then we'll lose out yet again!

If you think that table-based layouts are as good, if not better than CSS layouts then you should not be allowed to talk on the subject, at least without me shooting you down first. Also, if you think W3Schools is the best resource to send someone to then you're just plain wrong.

If you're new to Web Design/Development don't bother with this place (it's full of programmers, not web developers). Go to a good Web Design/Development community like SitePoint.


Opinion: Frameworks and third-party components should only be used as a last resort.

I often see programmers immediately pick a framework to accomplish a task without learning the underlying approach it takes to do its work. Something will inevitably break, or we'll find a limitation we didn't account for, and we'll be immediately stuck and have to rethink a major part of the system. Frameworks are fine to use as long as the choice is carefully thought out.


Never change what is not broken.


  • Xah Lee: actually has some pretty noteworthy and legitimate viewpoints if you can filter out all the invective, and rationally evaluate statements without agreeing (or disagreeing) based solely on the personality behind the statements. A lot of my "controversial" viewpoints have been echoed by him, and other notorious "trolls" who have criticized languages or tools I use(d) on a regular basis.

  • [Documentation Generators](http://en.wikipedia.org/wiki/Comparison_of_documentation_generators): ... the kind where the creator invented some custom-made especially-for-documenting-sourcecode roll-your-own syntax (including, but not limited to, JavaDoc) are totally superfluous and a waste of time because:

    • 1) They are underused by the people who should be using them the most; and
    • 2) All of these mini-documentation-languages could easily be replaced with YAML

The latest design patterns tend to be so much snake oil. As has been said previously in this question, overuse of design patterns can harm a design much more than help it.

If I hear one more person saying that "everyone should be using IOC" (or some similar pile of turd), I think I'll be forced to hunt them down and teach them the error of their ways.


If you have ever let anyone from rentacoder.com touch your project, both it and your business are completely devoid of worth.


You don't always need a database.

If you need to store less than a few thousand "things" and you don't need locking, flat files can work and are better in a lot of ways. They are more portable, and you can hand edit them in a pinch. If you have proper separation between your data and business logic, you can easily replace the flat files with a database if your app ever needs it. And if you design it with this in mind, it reminds you to have proper separation between your data and business logic.
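
One possible way to keep that separation, sketched in Java with invented names: the business logic talks to an interface, and the flat-file store is just today's implementation:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    // The business logic only ever sees this interface...
    interface NoteStore {
        List<String> loadAll() throws IOException;
        void saveAll(List<String> notes) throws IOException;
    }

    // ...which today happens to be backed by a flat file. Swapping in a database
    // later means writing one new implementation, not rewriting the callers.
    class FlatFileNoteStore implements NoteStore {
        private final Path file;
        FlatFileNoteStore(Path file) { this.file = file; }

        public List<String> loadAll() throws IOException {
            return Files.readAllLines(file, StandardCharsets.UTF_8);
        }

        public void saveAll(List<String> notes) throws IOException {
            Files.write(file, notes, StandardCharsets.UTF_8);
        }
    }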

--
bmb


1) The Business Apps farce:

I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.

Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and write quickly. You have massive APIs where half of those are just to integrate the work of the other APIs, interfaces that are impossible to recycle, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.

How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?

The point of this is not that complexity==bad, it's that unnecessary complexity==bad. I've worked in massive enterprise installations where some of it was necessary, but even there, in most cases a few home-grown scripts and a simple web front end are all that's needed to solve most use cases.

I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.

2) The n-years-of-experience-required:

Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.

3) The common "computer science" degree curriculum:

The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.


Two brains think better than one

I firmly believe that pair programming is the number one factor when it comes to increasing code quality and programming productivity. Unfortunately it is also highly controversial for management, who believe that "more hands => more code => $$$!"


Remove classes. Number of classes (methods of classes) in .NET Framework handles exception implicitly. It's difficult to work with a dumb person.


Opinion: most code out there is crappy, because that's what the programmers WANT it to be.

Indirectly, we have been nurturing a culture of extreme creativeness. It's not that I don't think problem solving has creative elements -- it does -- it's just that it's not even remotely the same as something like painting (see Paul Graham's famous "Hackers and Painters" essay).

If we bend our industry towards that approach, ultimately it means letting every programmer go forth and whack out whatever highly creative, crazy stuff they want. Of course, for any sizable project, trying to put together dozens of unrelated, unstructured, unplanned bits into one final coherent bit won't work by definition. That's not a guess, or an estimate, it's the state of the industry that we face today. How many times have you seen sub-bits of functionality in a major program that were completely inconsistent with the rest of the code? It's so common now, it's a wonder anyone can use any of these messes.

Convoluted, complicated, ugly stuff that just keeps getting worse and more unstable. If we were building something physical, everyone on the planet would call us out on how horribly ugly and screwed up the stuff is, but because it is more or less hidden by being virtual, we are able to get away with some of the worst manufacturing processes that our species will ever see. (Can you imagine a car where four different people designed the four different wheels, in four different ways?)

But the sad part, the controversial part of it all, is that there is absolutely NO reason for it to be this way, other than historically the culture was towards more freedom and less organization, so we stayed that way (and probably got a lot worse). Software development is a joke, but it's a joke because that's what the programmers want it to be (but would never in a million years admit that it was true, a "plot by management" is a better reason for most people).

How long will we keep shooting ourselves in the foot before we wake up and realize that we are the ones holding the gun, pointing it and also pulling the trigger?

Paul.


Getters and Setters are Highly Overused

I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public, maybe a bit different if you're using threads (but that's generally not the case) or if your accessors have business/presentation logic (something 'strange' at least).

I'm not in favor of public fields, but against making a getter/setter (or Property) for everyone of them, and then claiming that doing that is encapsulation or information hiding... ha!

UPDATE:

This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).

First of all: anyone who uses public fields deserves jail time

Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.

Many people think:

private fields + public accessors == encapsulation

I say (automatic or not) generation of getter/setter pair for your fields effectively goes against the so called encapsulation you are trying to achieve.

Lastly, let me quote Uncle Bob in this topic (taken from chapter 6 of "Clean Code"):

There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?
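
A hedged illustration of the difference, with invented names: the accessor-pair version exposes the field in all but name, while the second version exposes behaviour and keeps the representation private:

    // A getter/setter pair that mirrors the field is public data in disguise.
    class AccountWithAccessors {
        private double balance;
        public double getBalance() { return balance; }
        public void setBalance(double balance) { this.balance = balance; }
    }

    // Expressing behaviour instead keeps the representation free to change.
    class Account {
        private double balance;
        public void deposit(double amount) {
            if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
            balance += amount;
        }
        public double balance() { return balance; }
    }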


Programmers take their (own little limited stupid) programming language as a sacrosanct religion.

It's so funny how programmers take these discussions almost like religious believers do: no criticism allowed, (often) no objective discussion, (very often) arguing based upon very limited or absent knowledge and information. For confirmation, just read the previous answers, and especially the comments.

Also funny, and another confirmation: by the definition of the question, "give me a controversial opinion", no controversial opinion should qualify for negative votes - actually the opposite: the more controversial, the better. But how do our programmers react? Like Pavlov's dogs, voting negative on disliked opinions.

PS: I upvoted some others for fairness.


Not really programming, but I can't stand CSS-only layouts just for the sake of it. It's counterproductive, frustrating, and makes maintenance a nightmare of floats and margins where changing the position of a single element can throw the entire page out of whack.

It's definitely not a popular opinion, but I'm done with my table layout in 20 minutes while the CSS gurus spend hours tweaking line-height, margins, padding and floats just to do something as basic as vertically centering a paragraph.


Most consulting programmers suck and should not be allowed to write production code.

IMHO, probably about 60% or more.


Using Stored Procedures

Unless you are writing a large procedural function composed of non-reusable SQL queries, please move your stored procedures out of the database and into version control.


Print statements are a valid way to debug code

I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than stepping through a debugger, and you can compare printed output against other runs of the app.

Just make sure to remove the print statements when you go to production (or better, turn them into logging statements)
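
A small sketch of that last suggestion, with invented names, using java.util.logging as the stand-in logger:

    import java.util.logging.Logger;

    class Checkout {
        private static final Logger LOG = Logger.getLogger(Checkout.class.getName());

        double total(double price, int quantity) {
            double total = price * quantity;
            // Quick and dirty while investigating:
            // System.out.println("total for " + quantity + " items = " + total);
            // Kept for production as a logging statement instead:
            LOG.fine("total for " + quantity + " items = " + total);
            return total;
        }
    }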


Programming: It's a fun job.

I seem to see two generalized groups of developers. Those who don't love it, but they are competent and the money is good. The other group loves it to a point that is kinda creepy. It seems to be their life.

I just think it's a well-paying job that is usually interesting and fun. There is all kinds of room to learn something new every minute of every day. I can't think of another job I would prefer. But it is still a job. Compromises will be made and the stuff you produce will not always be as good as it could be.

Given my druthers, I would be on a beach drinking beer or playing with my kids.


If I were being controversial, I'd have to suggest that Jon Skeet isn't omnipotent..


VB sucks
While not terribly controversial in general, when you work in a VB house it is


I really dislike when people tell me to use getters and setters instead of making the variable public when you should be able to both get and set the class variable.

I totally agree with it if it's to change a variable in an object inside your object, so you don't get things like a.b.c.d.e = something; but I would rather use a.x = something; than a.setX(something); I think a.x = something; is actually both easier to read and prettier than set/get in the same example.

I don't see the reason for writing:

void setX(T x) { this->x = x; }

T getX() { return x; }

which is more code, more time when you do it over and over again, and just makes the code harder to read.


Code as Design: Three Essays by Jack W. Reeves

The source code of any software is its most accurate design document. Everything else (specs, docs, and sometimes comments) is either incorrect, outdated or misleading.

Guaranteed to get you fired pretty much everywhere.


Opinion: Data driven design puts the cart before the horse. It should be eliminated from our thinking forthwith.

The vast majority of software isn't about the data, it's about the business problem we're trying to solve for our customers. It's about a problem domain, which involves objects, rules, flows, cases, and relationships.

When we start our design with the data, and model the rest of the system after the data and the relationships between the data (tables, foreign keys, and x-to-x relationships), we constrain the entire application to how the data is stored in and retrieved from the database. Further, we expose the database architecture to the software.

The database schema is an implementation detail. We should be free to change it without having to significantly alter the design of our software at all. The business layer should never have to know how the tables are set up, or if it's pulling from a view or a table, or getting the table from dynamic SQL or a stored procedure. And that type of code should never appear in the presentation layer.

Software is about solving business problems. We deal with users, cars, accounts, balances, averages, summaries, transfers, animals, messages, packages, carts, orders, and all sorts of other real tangible objects, and the actions we can perform on them. We need to save, load, update, find, and delete those items as needed. Sometimes, we have to do those things in special ways.

But there's no real compelling reason that we should take the work that should be done in the database and move it away from the data and put it in the source code, potentially on a separate machine (introducing network traffic and degrading performance). Doing so means turning our backs on the decades of work that has already been done to improve the performance of stored procedures and functions built into databases. The argument that stored procedures introduce "yet another API" to be managed is specious: of course it does; that API is a facade that shields you from the database schema, including the intricate details of primary and foreign keys, transactions, cursors, and so on, and it prevents you from having to splice SQL together in your source code.

Put the horse back in front of the cart. Think about the problem domain, and design the solution around it. Then, derive the data from the problem domain.


Here's mine:

"You don't need (textual) syntax to express objects and their behavior."

I subscribe to the ideas of Jonathan Edwards and his Subtext project - http://alarmingdevelopment.org/


This one is not exactly on programming, because html/css are not programming languages.

Tables are ok for layout

CSS and divs can't do everything; save yourself the hassle and use a simple table, then apply CSS on top of it.


The vast majority of software being developed does not involve the end-user when gathering requirements.

Usually it's just some managers who are providing 'requirements'.


(Unnamed) tuples are evil

  • If you're using tuples as a container for several objects with unique meanings, use a class instead.
  • If you're using them to hold several objects that should be accessible by index, use a list.
  • If you're using them to return multiple values from a method, use Out parameters instead (this does require that your language supports pass-by-reference)

  • If it's part of a code obfuscation strategy, keep using them!

I see people using tuples just because they're too lazy to bother giving NAMES to their objects. Users of the API are then forced to access items in the tuple based on a meaningless index instead of a useful name.
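
Java has no built-in tuple type, but the same argument applies to generic Pair classes; a hypothetical sketch of the named-class alternative:

    // Index-based access hides meaning (what were pair[0] and pair[1] again?).
    // A tiny named class documents itself.
    class GeoLocation {
        final double latitude;
        final double longitude;
        GeoLocation(double latitude, double longitude) {
            this.latitude = latitude;
            this.longitude = longitude;
        }
    }

    class Demo {
        public static void main(String[] args) {
            GeoLocation office = new GeoLocation(51.5074, -0.1278);
            System.out.println(office.latitude + ", " + office.longitude);
        }
    }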


Anonymous functions suck.

I'm teaching myself jQuery and, while it's an elegant and immensely useful technology, most people seem to treat it as some kind of competition in maximizing the use of anonymous functions.

Function and procedure naming (along with variable naming) is the greatest expressive ability we have in programming. Passing functions around as data is a great technique, but making them anonymous and therefore non-self-documenting is a mistake. It's a lost chance for expressing the meaning of the code.


I'd rather be truly skilled/experienced in an older technology that allows me to solve real-world problems effectively, as opposed to new "fashionable" technologies that are still going through their adolescent stage.


"Programmers are born, not made."


Inheritance is evil and should be deprecated.

The truth is aggregation is better in all cases. Statically typed OOP languages can't avoid inheritance; it's the only way to describe what a method wants from a type. But dynamic languages and duck typing can live without it. Ruby mixins are much more powerful than inheritance and a lot more controllable.


Don't be shy, throw an exception. Exceptions are a perfectly valid way to signal failure, and are much clearer than any return-code system. "Exceptional" has nothing to do with how often this can happen, and everything to do with what the class considers normal execution conditions. Throwing an exception when a division by zero occurs is just fine, regardless of how often the case can happen. If the problem is likely, guard your code so that the method doesn't get called with incorrect arguments.
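
A minimal Java sketch of that stance, with invented names: guard the argument where the caller can know it is wrong, and throw when the contract is violated, regardless of how often that happens:

    class Allocator {
        // Throwing here is fine no matter how often callers get it wrong;
        // "exceptional" means outside this method's normal execution conditions.
        static int perPerson(int totalItems, int people) {
            if (people == 0) {
                throw new IllegalArgumentException("people must be non-zero");
            }
            return totalItems / people;
        }
    }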


Requirements analysis, specification, design, and documentation will almost never fit into a "template." You are 100% of the time better off by starting with a blank document and beginning to type with a view of "I will explain this in such a way that if I were dead and someone else read this document, they would know everything that I know and see and understand now" and then organizing from there, letting section headings and such develop naturally and fit the task you are specifying, rather than being constrained to some business or school's idea of what your document should look like. If you have to do a diagram, rather than using somebody's formal and incomprehensible system, you're often better off just drawing a diagram that makes sense, with a clear legend, which actually specifies the system you are trying to specify and communicates the information that the developer on the other end (often you, after a few years) needs to receive.

[If you have to, once you've written the real documentation, you can often shoehorn it into whatever template straightjacket your organization is imposing on you. You'll probably find yourself having to add section headings and duplicate material, though.]

The only time templates for these kinds of documents make sense is when you have a large number of tasks which are very similar in nature, differing only in details. "Write a program to allow single-use remote login access through this modem bank, driving the terminal connection nexus with C-Kermit," "Produce a historical trend and forecast report for capacity usage," "Use this library to give all reports the ability to be faxed," "Fix this code for the year 2000 problem," and "Add database triggers to this table to populate a software product provided for us by a third-party vendor" can not all be described by the same template, no matter what people may think. And for the record, the requirements and design diagramming techniques that my college classes attempted to teach me and my classmates could not be used to specify a simple calculator program (and everyone knew it).


New web projects should consider not using Java.

I've been using Java to do web development for over 10 years now. At first, it was a step in the right direction compared to the available alternatives. Now, there are better alternatives than Java.

This is really just a specific case of the magic hammer approach to problem solving, but it's one that's really painful.


Don't worry too much about what language to learn; use the industry heavyweights like C# or Python. Languages like Ruby are fun in the bedroom, but don't do squat in workplace scenarios. Languages like C# and Java can handle small to very large software projects. If anyone says otherwise, then you're talking about a scripting language. Period!

Before starting a project, consider how much support and how many code samples are available on the net. Again, choosing a language like Ruby, which has very few code samples on the web compared to Java for example, will only cause you grief further down the road when you're stuck on a problem.

You can't post a message on a forum and expect an answer back while your boss is asking you how your coding is going. What are you going to say? "I'm waiting for someone to help me out on this forum."

Learn one language and learn it well. Learning multiple languages may carry over skills and practices, but you'll only ever be OK at all of them. Be good at one. There are entire books dedicated to threading in Java which, when you think about it, is only one namespace out of over 100.

Master one or be ok at lots.


I often get shouted down when I claim that the code is merely an expression of my design. I quite dislike the way I see so many developers design their system "on the fly" while coding it.

The amount of time and effort wasted when one of these cowboys falls off his horse is amazing - and 9 times out of 10 the problem they hit would have been uncovered with just a little upfront design work.

I feel that modern methodologies do not emphasize the importance of design in the overall software development process. E.g., the importance placed on code reviews when you haven't even reviewed your design! It's madness.


A Clever Programmer Is Dangerous

I have spent more time trying to fix code written by "clever" programmers. I'd rather have a good programmer than an exceptionally smart programmer who wants to prove how clever he is by writing code that only he (or she) can interpret.


You shouldn't settle on the first way you find to code something that "works."

I really don't think this should be controversial, but it is. People see an example from elsewhere in the code, from online, or from some old "Teach yourself Advanced Power SQLJava#BeansServer in 3.14159 minutes" book dated 1999, and they think they know something and they copy it into their code. They don't walk through the example to find out what each line does. They don't think about the design of their program and see if there might be a more organized or more natural way to do the same thing. They don't make any attempt at keeping their skill sets up to date to learn that they are using ideas and methods deprecated in the last year of the previous millennium. They don't seem to have the experience to learn that what they're copying has created specific horrific maintenance burdens for programmers for years and that they can be avoided with a little more thought.

In fact, they don't even seem to recognize that there might be more than one way to do something.

I come from the Perl world, where one of the slogans is "There's More Than One Way To Do It." (TMTOWTDI) People who've taken a cursory look at Perl have written it off as "write-only" or "unreadable," largely because they've looked at crappy code written by people with the mindset I described above. Those people have given zero thought to design, maintainability, organization, reduction of duplication in code, coupling, cohesion, encapsulation, etc. They write crap. Those people exist programming in every language, and easy to learn languages with many ways to do things give them plenty of rope and guns to shoot and hang themselves with. Simultaneously.

But if you hang around the Perl world for longer than a cursory look, and watch what the long-timers in the community are doing, you see a remarkable thing: the good Perl programmers spend some time seeking to find the best way to do something. When they're naming a new module, they ask around for suggestions and bounce their ideas off of people. They hand their code out to get looked at, critiqued, and modified. If they have to do something nasty, they encapsulate it in the smallest way possible in a module for use in a more organized way. Several implementations of the same idea might hang around for awhile, but they compete for mindshare and marketshare, and they compete by trying to do the best job, and a big part of that is by making themselves easily maintainable. Really good Perl programmers seem to think hard about what they are doing and looking for the best way to do things, rather than just grabbing the first idea that flits through their brain.

Today I program primarily in the Java world. I've seen some really good Java code, but I see a lot of junk as well, and I see more of the mindset I described at the beginning: people settle on the first ugly lump of code that seems to work, without understanding it, without thinking if there's a better way.

You will see both mindsets in every language. I'm not trying to impugn Java specifically. (Actually I really like it in some ways ... maybe that should be my real controversial opinion!) But I'm coming to believe that every programmer needs to spend a good couple of years with a TMTOWTDI-style language, because even though conventional wisdom has it that this leads to chaos and crappy code, it actually seems to produce people who understand that you need to think about the repercussions of what you are doing instead of trusting your language to have been designed to make you do the right thing with no effort.

I do think you can err too far in the other direction: i.e., perfectionism that totally ignores your true needs and goals (often the true needs and goals of your business, which is usually profitability). But I don't think anyone can be a truly great programmer without learning to invest some greater-than-average effort in thinking about finding the best (or at least one of the best) way to code what they are doing.


What strikes me as amusing about this question is that I've just read the first page of answers, and so far, I haven't found a single controversial opinion.

Perhaps that says more about the way stackoverflow generates consensus than anything else. Maybe I should have started at the bottom. :-)


I think that using regions in C# is totally acceptable to collapse your code while in VS. Too many people try to say it hides your code and makes it hard to find things. But if you use them properly they can be very helpful to identify sections of code.


I firmly believe that unmanaged code isn't worth the trouble. The extra maintainability expenses associated with hunting down memory leaks which even the best programmers introduce occasionally far outweigh the performance to be gained from a language like C++. If Java, C#, etc. can't get the performance you need, buy more machines.


Whenever you expose a mutable class to the outside world, you should provide events to make it possible to observe its mutation. The extra effort may also convince you to make it immutable after all.


That best practices are a hazard because they ask us to substitute slogans for thinking.


The use of hungarian notation should be punished with death.

That should be controversial enough ;)


It's not the tools, it's you

Whenever developers try to do something new, like UML diagrams, charts of any sort, or project management, they first look for the perfect tool to solve the problem. After endless searches that fail to find the right tool, their motivation starves. All that is left is complaints about the lack of usable software, and the insight that the plan to get organized died in the absence of a piece of software.

Well, organization is only down to yourself. If you are used to organizing, you can do it with or without the aid of software (and most do without). If you are not used to organizing, nobody can help you.

So "not having the right software" is just the simplest excuse for not being organized at all.


Only write an abstraction if it's going to save 3X as much time later.

I see people write all these crazy abstractions sometimes and I think to myself, "Why?"

Unless an abstraction is really going to save you time later or it's going to save the person maintaining your code time, it seems people are just writing spaghetti code more and more.


That most language proponents make a lot of noise.


A programming task is only fun while it's impossible; that is, up to the point where you've convinced yourself you'll be able to solve it successfully.

This, I suppose, is why so many of my projects end up halfway finished in a folder called "to_be_continued".


If it's not native, it's not really programming

By definition, a program is an entity that is run by the computer. It talks directly to the CPU and the OS. Code that does not talk directly to the CPU and the OS, but is instead run by some other program that does talk directly to the CPU and the OS, is not a program; it's a script.

This was just simple common sense, completely non-controversial, back before Java came out. Suddenly there was a scripting language with a large enough feature set to accomplish tasks that had previously been exclusively the domain of programs. In response, Microsoft developed the .NET framework and some scripting languages to run on it, and managed to muddy the waters further by slowly reducing support for true programming among their development tools in favor of .NET scripting.

Even though it can accomplish a lot of things that you previously had to write programs for, managed code of any variety is still scripting, not programming, and "programs" written in it do and always will share the performance characteristics of scripts: they run more slowly and use up far more RAM than a real (native) program would take to accomplish the same task.

People calling it programming are doing everyone a disservice by dumbing down the definition. It leads to lower quality across the board. If you try and make programming so easy that any idiot can do it, what you end up with are a whole lot of idiots who think they can program.


Opinion: explicit variable declaration is a great thing.

I'll never understand the "wisdom" of letting the developer waste costly time tracking down runtime errors caused by variable name typos instead of simply letting the compiler/interpreter catch them.

Nobody's ever given me an explanation better than "well it saves time since I don't have to write 'int i;'." Uhhhhh... yeah, sure, but how much time does it take to track down a runtime error?


All variables/properties should be readonly/final by default.

The reasoning is a bit analogous to the sealed argument for classes, put forward by Jon. One entity in a program should have one job, and one job only. In particular, it makes absolutely no sense for most variables and properties to ever change value. There are basically two exceptions.

  1. Loop variables. But then, I argue that the variable actually doesn't change value at all. Rather, it goes out of scope at the end of the loop and is re-instantiated in the next turn. Therefore, immutability would work nicely with loop variables and everyone who tries to change a loop variable's value by hand should go straight to hell.

  2. Accumulators. For example, imagine the case of summing over the values in an array, or even a list/string that accumulates some information about something else.

    Today, there are better means to accomplish the same goal. Functional languages have higher-order functions, Python has list comprehension and .NET has LINQ. In all these cases, there is no need for a mutable accumulator / result holder.

    Consider the special case of string concatenation. In many environments (.NET, Java), strings are actually immutable. Why then allow an assignment to a string variable at all? Much better to use a builder class (e.g. a StringBuilder) all along.

I realize that most languages today just aren't built to accommodate my wish. In my opinion, all these languages are fundamentally flawed for this reason. They would lose nothing of their expressiveness, power, or ease of use if they were changed to treat all variables as read-only by default and didn't allow any assignment to them after initialization.
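
As a minimal Java sketch of the accumulator point (the names are mine, not part of the original answer): with higher-order operations and a builder, every local can be final.

import java.util.List;
import java.util.stream.Collectors;

public class ReadOnlyDemo {
    public static void main(String[] args) {
        final List<Integer> values = List.of(3, 1, 4, 1, 5);

        // No mutable accumulator: the sum comes from a higher-order operation.
        final int sum = values.stream().mapToInt(Integer::intValue).sum();

        // No reassignment of a String variable: a builder (here, joining) does the work.
        final String joined = values.stream()
                                    .map(String::valueOf)
                                    .collect(Collectors.joining(", "));

        System.out.println(sum + " <- " + joined); // 14 <- 3, 1, 4, 1, 5
    }
}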


That most language proponents make a lot of noise.


2 space indent.

No discussion. It just has to be that way ;-)


New web projects should consider not using Java.

I've been using Java to do web development for over 10 years now. At first, it was a step in the right direction compared to the available alternatives. Now, there are better alternatives than Java.

This is really just a specific case of the magic hammer approach to problem solving, but it's one that's really painful.


The world needs more GOTOs

GOTOs are avoided religiously, often with no reasoning beyond "my professor told me GOTOs are bad." They have a purpose and would greatly simplify production code in many places.

That said, they aren't really necessary in 99% of the code you'll ever write.


Source files are SO 20th century.

Within the body of a function/method, it makes sense to represent procedural logic as linear text. Even when the logic is not strictly linear, we have good programming constructs (loops, if statements, etc) that allow us to cleanly represent non-linear operations using linear text.

But there is no reason that I should be required to divide my classes among distinct files or sort my functions/methods/fields/properties/etc in a particular order within those files. Why can't we just throw all those things into a big database file and let the IDE take care of sorting everything dynamically? If I want to sort my members by name then I'll click the member header on the members table. If I want to sort them by accessibility then I'll click the accessibility header. If I want to view my classes as an inheritance tree, then I'll click the button to do that.

Perhaps classes and members could be viewed spatially, as if they were some sort of entities within a virtual world. If the programmer desired, the IDE could automatically position classes & members that use each other near each other so that they're easy to find. Imagine being able to zoom in and out of this virtual world. Zoom all the way out and you can see namespace galaxies with little class planets in them. Zoom in to a namespace and you can see class planets with method continents and islands and inner classes as orbiting moons. Zoom in to a method, and you see... the source code for that method.

Basically, my point is that in modern languages it doesn't matter what file(s) you put your classes in or in what order you define a class's members, so why are we still forced to use these archaic practices? Remember when Gmail came out and Google said "search, don't sort"? Well, why can't the same philosophy be applied to programming languages?


MVC for the web should be far simpler than traditional MVC.

Traditional MVC involves code that "listens" for "events" so that the view can continually be updated to reflect the current state of the model. In the web paradigm however, the web server already does the listening, and the request is the event. Therefore MVC for the web need only be a specific instance of the mediator pattern: controllers mediating between views and the model. If a web framework is crafted properly, a re-usable core should probably not be more than 100 lines. That core need only implement the "page controller" paradigm but should be extensible so as to be able to support the "front controller" paradigm.

Below is a method that is the crux of my own framework, used successfully in an embedded consumer device manufactured by a Fortune 100 network hardware manufacturer, for a Fortune 50 media company. My approach has been likened to Smalltalk by a former Smalltalk programmer and author of an O'Reilly book about the most prominent Java web framework ever; furthermore, I have ported the same framework to mod_python/psp.

static function sendResponse(IBareBonesController $controller) {
  // Let the controller apply the request input to the model and keep the resulting object (the "mto")...
  $controller->setMto($controller->applyInputToModel());
  // ...then have that object apply the model to the view.
  $controller->mto->applyModelToView();
}

Using regexes to parse HTML is, in many cases, fine

Every time someone posts a question on Stack Overflow asking how to achieve some HTML manipulation with a regex, the first answer is "Regex is an insufficient tool to parse HTML, so don't do it". If the questioner was trying to build a web browser, this would be a helpful answer. However, usually the questioner wants to do something like add a rel attribute to all the links to a certain domain, usually in a case where certain assumptions can be made about the style of the incoming markup, something that is entirely reasonable to do with a regex.
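
For example, under the narrow assumption that the incoming markup uses plain double-quoted href attributes and no existing rel attribute, a sketch like this (the regex and domain are invented for illustration) does the job without a full parser:

import java.util.regex.Pattern;

public class RelTagger {
    public static void main(String[] args) {
        String html = "<p><a href=\"http://example.com/page\">link</a> and "
                    + "<a href=\"http://other.org/\">other</a></p>";

        // Only links to the given domain get the rel attribute added.
        Pattern p = Pattern.compile("<a href=\"(https?://example\\.com[^\"]*)\"");
        String tagged = p.matcher(html).replaceAll("<a rel=\"nofollow\" href=\"$1\"");

        System.out.println(tagged);
    }
}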


C (or C++) should be the first programming language

The first language should NOT be the easy one; it should be one that sets up the student's mind and prepares it for serious computer science.
C is perfect for that: it forces students to think about memory and all the low-level stuff, and at the same time they can learn how to structure their code (it has functions!)

C++ has the added advantage that it really sucks :) thus the students will understand why people had to come up with Java and C#


Newer languages, and managed code do not make a bad programmer better.


Excessive HTML in PHP files: sometimes necessary

Excessive Javascript in PHP files: trigger the raptor attack

While I have a hard time following all your switching between echoing and ?> <?php 'ing HTML (after all, PHP is just a processor for HTML), lines and lines of JavaScript added in make it a completely unmaintainable mess.

People have to grasp this: They are two separate programming languages. Pick one to be your primary language. Then go on and find a quick, clean and easily maintainable way to make your primary include the secondary language.

The reason why you jump between PHP, Javascript and HTML all the time is because you are bad at all three of them.

Ok, maybe it's not exactly controversial. I had the impression this was a general frustration-venting topic :)


Best practices aren't.


A real programmer loves open-source like a soulmate and loves Microsoft as a dirty but satisfying prostitute


System.Data.DataSet Rocks!

Strongly-typed DataSets are better, in my opinion, than custom DDD objects for most business applications.

Reasoning: We're bending over backwards to figure out Unit of Work on custom objects, LINQ to SQL, and Entity Framework, and it's adding complexity. Use a nice code generator from somewhere to generate the data layer, and the Unit of Work sits on the object collections (DataTable and DataSet) - no mystery.


Never make up your mind on an issue before thoroughly considering said issue. No programming standard EVER justifies approaching an issue in a poor manner. If the standard demands a class to be written, but after careful thought, you deem a static method to be more appropriate, always go with the static method. Your own discretion is always better than even the best forward thinking of whoever wrote the standard. Standards are great if you're working in a team, but rules are meant to be broken (in good taste, of course).


Generated documentation is nearly always totally worthless.

Or, as a corollary: Your API needs separate sets of documentation for maintainers and users.

There are really two classes of people who need to understand your API: maintainers, who must understand the minutiae of your implementation to be effective at their job, and users, who need a high-level overview, examples, and thorough details about the effects of each method they have access to.

I have never encountered generated documentation that succeeded in either area. Generally, when programmers write comments for tools to extract and make documentation out of, they aim for somewhere in the middle--just enough implementation detail to bore and confuse users yet not enough to significantly help maintainers, and not enough overview to be of any real assistance to users.

As a maintainer, I'd always rather have clean, clear comments, unmuddled by whatever strange markup your auto-doc tool requires, that tell me why you wrote that weird switch statement the way you did, or what bug this seemingly-redundant parameter check fixes, or whatever else I need to know to actually keep the code clean and bug-free as I work on it. I want this information right there in the code, adjacent to the code it's about, so I don't have to hunt down your website to find it in a state that lends itself to being read.

As a user, I'd always rather have a thorough, well-organized document (a set of web pages would be ideal, but I'd settle for a well-structured text file, too) telling me how your API is architectured, what methods do what, and how I can accomplish what I want to use your API to do. I don't want to see internally what classes you wrote to allow me to do work, or files they're in for that matter. And I certainly don't want to have to download your source so I can figure out exactly what's going on behind the curtain. If your documentation were good enough, I wouldn't have to.

That's how I see it, anyway.


Making software configurable is a bad idea.

Configurable software allows the end-user (or admin etc) to choose too many options, which may not all have been tested together (or rather, if there are more than a very small number, I can guarantee they will not have been tested).

So I think software which has its configuration hard-coded (but not necessarily shunning constants etc) to JUST WORK is a good idea. Run with sensible defaults, and DO NOT ALLOW THEM TO BE CHANGED.

A good example of this is the number of configuration options on Google Chrome - however, this is probably still too many :)


I think that using regions in C# is totally acceptable to collapse your code while in VS. Too many people say they hide your code and make it hard to find things. But if you use them properly they can be very helpful to identify sections of code.


coding is not typing

It takes time to write the code. Most of the time in the editor window, you are just looking at the code, not actually typing. Not as often, but quite frequently, you are deleting what you have written. Or moving from one place to another. Or renaming.

If you are banging away at the keyboard for a long time you are doing something wrong.

Corollary: The number of lines of code written per day is not a linear measurement of a programmer's productivity. A programmer who writes 100 lines in a day is quite likely a better programmer than one who writes 20, but one who writes 5000 is almost certainly a bad programmer.


Code == Design

I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.


Here's an article on the topic of Code as Design.


One class per file

Who cares? I much prefer entire programs contained in one file rather than a million different files.


Separation of concerns is evil :)

Only separate concerns if you have a good reason for it. Otherwise, don't separate them.

I have encountered too many occasions of separation only for the sake of separation. The second half of Dijkstra's statement "Minimal coupling, maximal cohesion" should not be forgotten. :)

Happy to discuss this further.


I've been burned for broadcasting these opinions in public before, but here goes:

Well-written code in dynamically typed languages follows static-typing conventions

Having used Python, PHP, Perl, and a few other dynamically typed languages, I find that well-written code in these languages follows static typing conventions, for example:

  • It's considered bad style to re-use a variable with different types (for example, it's bad style to take a list variable and assign it an int, then assign the variable a bool in the same method). Well-written code in dynamically typed languages doesn't mix types.

  • A type-error in a statically typed language is still a type-error in a dynamically typed language.

  • Functions are generally designed to operate on a single datatype at a time, so that a function which accepts a parameter of type T can only sensibly be used with objects of type T or subclasses of T.

  • Functions designed to operate on many different datatypes are written in a way that constrains parameters to a well-defined interface. In general terms, if two objects of types A and B perform a similar function but aren't subclasses of one another, then they almost certainly implement the same interface.

While dynamically typed languages certainly provide more than one way to crack a nut, most well-written, idiomatic code in these languages pays close attention to types just as rigorously as code written in statically typed languages.

Dynamic typing does not reduce the amount of code programmers need to write

When I point out how peculiar it is that so many static-typing conventions cross over into dynamic typing world, I usually add "so why use dynamically typed languages to begin with?". The immediate response is something along the lines of being able to write more terse, expressive code, because dynamic typing allows programmers to omit type annotations and explicitly defined interfaces. However, I think the most popular statically typed languages, such as C#, Java, and Delphi, are bulky by design, not as a result of their type systems.

I like to use languages with a real type system like OCaml, which is not only statically typed, but its type inference and structural typing allow programmers to omit most type annotations and interface definitions.

The existence of the ML family of languages demonstrates that we can enjoy the benefits of static typing with all the brevity of writing in a dynamically typed language. I actually use OCaml's REPL for ad hoc, throwaway scripts in exactly the same way everyone else uses Perl or Python as a scripting language.


Although I'm in full favor of Test-Driven Development (TDD), I think there's a vital step before developers even start the full development cycle of prototyping a solution to the problem.

We too often get caught up trying to follow our TDD practices for a solution that may be misdirected because we don't know the domain well enough. Simple prototypes can often elucidate these problems.

Prototypes are great because you can quickly churn through and throw away more code than when you're writing tests first (sometimes). You can then begin the development process with a blank slate but a better understanding.


Assembler is not dead

In my job (copy protection systems), assembler programming is essential. I have worked with many HLL-based copy protection systems, and only assembler gives you the real power to exploit all the possibilities hidden in the code (like code mutation and other low-level stuff).

Also, many code optimizations are possible only with assembler programming. Look at the sources of any video codec: they are written in assembler and optimized to use MMX/SSE/SSE2 opcodes and whatnot; many game engines use assembler-optimized routines; even the Windows kernel has SSE-optimized routines:

NTDLL.RtlMoveMemory

.text:7C902CD8                 push    ebp
.text:7C902CD9                 mov     ebp, esp
.text:7C902CDB                 push    esi
.text:7C902CDC                 push    edi
.text:7C902CDD                 push    ebx
.text:7C902CDE                 mov     esi, [ebp+0Ch]
.text:7C902CE1                 mov     edi, [ebp+8]
.text:7C902CE4                 mov     ecx, [ebp+10h]
.text:7C902CE7                 mov     eax, [esi]
.text:7C902CE9                 cld
.text:7C902CEA                 mov     edx, ecx
.text:7C902CEC                 and     ecx, 3Fh
.text:7C902CEF                 shr     edx, 6
.text:7C902CF2                 jz      loc_7C902EF2
.text:7C902CF8                 dec     edx
.text:7C902CF9                 jz      loc_7C902E77
.text:7C902CFF                 prefetchnta byte ptr [esi-80h]
.text:7C902D03                 dec     edx
.text:7C902D04                 jz      loc_7C902E03
.text:7C902D0A                 prefetchnta byte ptr [esi-40h]
.text:7C902D0E                 dec     edx
.text:7C902D0F                 jz      short loc_7C902D8F
.text:7C902D11
.text:7C902D11 loc_7C902D11:                           ; CODE XREF: .text:7C902D8Dj
.text:7C902D11                 prefetchnta byte ptr [esi+100h]
.text:7C902D18                 mov     eax, [esi]
.text:7C902D1A                 mov     ebx, [esi+4]
.text:7C902D1D                 movnti  [edi], eax
.text:7C902D20                 movnti  [edi+4], ebx
.text:7C902D24                 mov     eax, [esi+8]
.text:7C902D27                 mov     ebx, [esi+0Ch]
.text:7C902D2A                 movnti  [edi+8], eax
.text:7C902D2E                 movnti  [edi+0Ch], ebx
.text:7C902D32                 mov     eax, [esi+10h]
.text:7C902D35                 mov     ebx, [esi+14h]
.text:7C902D38                 movnti  [edi+10h], eax

So if you hear next time that assembler is dead, think about the last movie you have watched or the game you've played (and its copy protection heh).


A Clever Programmer Is Dangerous

I have spent too much time trying to fix code written by "clever" programmers. I'd rather have a good programmer than an exceptionally smart programmer who wants to prove how clever he is by writing code that only he (or she) can interpret.


Development teams should be segregated more often by technological/architectural layers instead of business function.

I come from a general culture where developers own "everything from web page to stored procedure". So in order to implement a feature in the system/application, they would prepare the database table schemas, write the stored procs, match the data access code, implement the business logic and web service methods, and the web page interfaces.

And guess what? Everybody has their own way of doing things! Everyone struggles to learn the ASP.NET AJAX and Telerik or Infragistics suites, Enterprise Library or other productivity, data layer and persistence frameworks, aspect-oriented frameworks, logging and caching application blocks, DB2 or Oracle peculiarities. And guess what? Everybody takes a heck of a long time to learn how to do things the proper way! Meaning lots of mistakes in the meantime, and plenty of resulting defects and performance bottlenecks! And a heck of a lot longer to fix them! Across each and every layer! Everybody has a hand in every Visual Studio project. Nobody is specialised to handle and optimise one problem/technology domain. Too many chefs spoil the soup. All the chefs result in some radioactive goo.

Developers may have cross-layer/domain responsibilities, but they should not pretend that they can be masters of all disciplines, and should be limited to only a few. In my experience, when a project is not a small one and utilises lots of technologies, covering more business functions in a single layer is more productive (as well as encouraging more test code to test that layer) than covering fewer business functions spanning the entire architectural stack (which motivates developers to test only via their UI and not via test code).


Two brains think better than one

I firmly believe that pair programming is the number one factor when it comes to increasing code quality and programming productivity. Unfortunately it is also highly controversial for management, who believe that "more hands => more code => $$$!"


My controversial opinion? Java doesn't suck, but Java APIs do. Why do Java libraries insist on making it hard to do simple tasks? And why, instead of fixing the APIs, do they create frameworks to help manage the boilerplate code? This opinion can apply to any language that requires 10 or more lines of code to read a line from a file.


Avoid indentation.

Use early returns, continues or breaks.

instead of:

if (passed != NULL)
{
   for(x in list)
   {
      if (peter)
      {
          print "peter";
          more code.
          ..
          ..
      }
      else
      {
          print "no peter?!"
      }
   }
}

do:

if (passed == NULL)
    return false;

for(x in list)
{
   if (!peter)
   {
       print "no peter?!"
       continue;
   }

   print "peter";
   more code.
   ..
   ..
}

Here's one which has seemed obvious to me for many years but is anathema to everyone else: it is almost always a mistake to switch off C (or C++) assertions with NDEBUG in 'release' builds. (The sole exceptions are where the time or space penalty is unacceptable).

Rationale: If an assertion fails, your program has entered a state which

  • has never been tested
  • the developer was unable to code a recovery strategy for
  • the developer has effectively documented as being inconceivable.

Yet somehow 'industry best practice' is that the thing should just muddle on and hope for the best when it comes to live runs with your customers' data.
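
The point is about C/C++ and NDEBUG, but the same trade-off shows up elsewhere; as a rough Java analogue (my transposition, not part of the original argument), assert statements are disabled at runtime unless you pass -ea, so "muddle on and hope" is actually the default:

public class AssertDemo {
    static double averageAge(int totalAge, int count) {
        // Documents a state the developer considers inconceivable.
        assert count > 0 : "count must be positive";
        return (double) totalAge / count;
    }

    public static void main(String[] args) {
        // java -ea AssertDemo  -> fails fast with an AssertionError
        // java AssertDemo      -> silently prints Infinity and muddles on
        System.out.println(averageAge(90, 0));
    }
}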


Web applications suck

My Internet connection is veeery slow. My experience with almost every Web site that is not Google is, at least, frustrating. Why doesn't anybody write desktop apps anymore? Oh, I see. Nobody wants to be bothered with learning how operating systems work. At least, not Windows. The last time you had to handle WM_PAINT, your head exploded. Creating a worker thread to perform a long task (I mean, doing it the Windows way) was totally beyond you. What the hell was a callback? Oh, my God!


Garbage collection sucks

No, it actually doesn't. But it makes the programmers suck like nothing else. In college, the first language they taught us was Visual Basic (the original one). After that, there was another course where the teachers pretended they taught us C++. But the damage was done. Nobody actually knew what this esoteric delete keyword did. After testing our programs, we either got invalid address exceptions or memory leaks. Sometimes, we got both. Among the 1% of my faculty who can actually program, only one can manage his memory by himself (at least, he pretends to), and he's writing this rant. The rest write their programs in VB.NET, which, by definition, is a bad language.


Dynamic typing sucks

Unless you're using assembler, of course (that's the kind of dynamic typing that actually deserves praise). What I mean is that the overhead imposed by dynamic, interpreted languages makes them suck. And don't give me that silly argument that different tools are good for different jobs. C is the right language for almost everything (it's fast, powerful and portable), and, when it isn't (when it's not fast enough), there's always inline assembly.


I might come up with more rants, but that will be later, not now.


Programmers who spend all day answering questions on Stackoverflow are probably not doing the work they are being paid to do.


Globals and/or Singletons are not inherently evil

I come from more of a sysadmin, shell, Perl (and my "real" programming), PHP type background; last year I was thrown into a Java development gig.

Singletons are evil. Globals are so evil they are not even allowed. Yet, Java has things like AOP, and now various "Dependency Injection" frameworks (we used Google Guice). AOP less so, but DI things for sure give you what? Globals. Uhh, thanks.


Debuggers should be forbidden. This would force people to write code that is testable through unit tests, and in the end would lead to much better code quality.

Remove Copy & Paste from ALL programming IDEs. Copy & pasted code is very bad, this option should be completely removed. Then the programmer will hopefully be too lazy to retype all the code so he makes a function and reuses the code.

Whenever you use a Singleton, slap yourself. Singletons are almost never necessary, and are most of the time just a fancy name for a global variable.


You must know how to type to be a programmer.

It's controversial among people who don't know how to type, but who insist that they can two-finger hunt-and-peck as fast as any typist, or that they don't really need to spend that much time typing, or that Intellisense relieves the need to type...

I've never met anyone who does know how to type, but insists that it doesn't make a difference.

See also: Programming's Dirtiest Little Secret


Hardcoding is good!

Really, it's more efficient and much easier to maintain in many cases!

The number of times I've seen constants put into parameter files... really, how often will you change the freezing point of water or the speed of light?

For C programs, just hard-code these types of values into a header file; for Java, into a static class, etc.

When these parameters have a drastic effect on your program's behaviour you really want to do a regression test on every change, and this seems more natural with hard-coded values. When things are stored in parameter/property files, the temptation is to think "this is not a program change so I don't need to test it".

The other advantage is it stops people messing with vital values in the parameter/property files because there aren't any!
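
A minimal sketch of the "static class" variant for Java (the class and field names are just illustrative):

// Values that will never change are compiled in, not read from a
// parameter file that nobody regression-tests.
public final class PhysicalConstants {
    private PhysicalConstants() {}  // no instances

    public static final double FREEZING_POINT_OF_WATER_K = 273.15;        // at 1 atm
    public static final double SPEED_OF_LIGHT_M_PER_S    = 299_792_458.0;
}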


You must know C to be able to call yourself a programmer!


How about this one:

Garbage collectors actually hurt programmers' productivity and make resource leaks harder to find and fix

Note that I am talking about resources in general, and not only memory.


The C++ STL library is so general purpose that it is optimal for no one.


1. You should not follow web standards - all the time.

2. You don't need to comment your code.

As long as it's understandable by a stranger.


I am of the opinion that there are too many people making programming decisions who shouldn't be worrying about implementation.


The only "best practice" you should be using all the time is "Use Your Brain".

Too many people jumping on too many bandwagons and trying to force methods, patterns, frameworks etc onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)

EDIT: Just to clarify - I don't think people should ignore best practices, valued opinions etc. Just that people shouldn't just blindly jump on something without thinking about WHY this "thing" is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?


Design patterns are a waste of time when it comes to software design and development.

Don't get me wrong, design patterns are useful but mainly as a communication vector. They can express complex ideas very concisely: factory, singleton, iterator...

But they shouldn't serve as a development method. Too often developers architect their code using a flurry of design pattern-based classes where a more concise design would be better, both in terms of readability and performance. All that with the illusion that individual classes could be reused outside their domain. If a class is not designed for reuse or isn't part of the interface, then it's an implementation detail.

Design patterns should be used to put names on organizational features, not to dictate the way code must be written.

(It was supposed to be controversial, remember?)


The only "best practice" you should be using all the time is "Use Your Brain".

Too many people jumping on too many bandwagons and trying to force methods, patterns, frameworks etc onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)

EDIT: Just to clarify - I don't think people should ignore best practices, valued opinions etc. Just that people shouldn't just blindly jump on something without thinking about WHY this "thing" is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?


A degree in Computer Science or other IT area DOES make you a more well rounded programmer

I don't care how many years of experience you have, how many blogs you've read, how many open source projects you're involved in. A qualification (I'd recommend longer than 3 years) exposes you to a different way of thinking and gives you a great foundation.

Just because you've written some better code than a guy with a BSc in Computer Science does not mean you are better than him. What you have, he can pick up in an instant, which is not the case the other way around.

Having a qualification shows your commitment, the fact that you would go above and beyond experience to make you a better developer. Developers which are good at what they do AND have a qualification can be very intimidating.

I would not be surprised if this answer gets voted down.

Also, once you have a qualification, you slowly stop comparing yourself to those with qualifications (my experience). You realize that it all doesn't matter at the end, as long as you can work well together.

Always act mercifully towards other developers, irrespective of qualifications.


There are far too many programmers who write far too much code.


Notepad is a perfectly fine text editor. (And sometimes WordPad for non-Windows line breaks.)

  • Edit config files
  • View log files
  • Development

I know people who actually believe this! They will however use an IDE for development, but continue to use Notepad for everything else!


Your job is to put yourself out of work.

When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.

If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.

Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.


"Java Sucks" - yeah, I know that opinion is definitely not held by all :)

I have that opinion because the majority of Java applications I've seen are memory hogs, run slowly, have horrible user interfaces, and so on.

G-Man


Associative Arrays / Hash Maps / Hash Tables (+whatever its called in your favourite language) are the best thing since sliced bread!

Sure, they provide fast lookup from key to value. But they also make it easy to construct structured data on the fly. In scripting languages it's often the only (or at least the most used) way to represent structured data.

IMHO they were a very important factor for the success of many scripting languages.

And even in C++, std::map and std::tr1::unordered_map have helped me write code faster.
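
A small Java illustration of "structured data on the fly" with a map (the keys and values are invented for the example):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MapDemo {
    public static void main(String[] args) {
        // An ad-hoc structured record, no class declaration needed up front.
        Map<String, Object> user = new HashMap<>();
        user.put("name", "Ada");
        user.put("age", 36);
        user.put("tags", List.of("admin", "editor"));

        System.out.println(user.get("name") + " -> " + user.get("tags"));
    }
}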


Lower camelCase is stupid and unsemantic

Using lower camelCase makes the name/identifier ("name" used from this point) look like a two-part thing. Upper CamelCase however, gives the clear indication that all the words belong together.

Hungarian notation is different ... because the first part of the name is a type indicator, and so it has a separate meaning from the rest of the name.

Some might argue that lower camelCase should be used for functions/procedures, especially inside classes. This is popular in Java and object oriented PHP. However, there is no reason to do that to indicate that they are class methods, because BY THE WAY THEY ARE ACCESSED it becomes more than clear that these are just that.

Some code examples:

// Java
myobj.objMethod()
// doesn't the dot and parens indicate that objMethod is a method of myobj?

# PHP
$myobj->objMethod() 
# doesn't the pointer and parens indicate that objMethod is a method of myobj?

Upper CamelCase is useful for class names and other static names. All non-static content should be recognised by the way it is accessed, not by its name format(!)

Here's my homogenous code example, where name behaviours are indicated by things other than their names... (also, I prefer underscores to separate words in names).

// Java
my_obj = new MyObj() // Clearly a class, since it's upper CamelCase
my_obj.obj_method()  // Clearly a method, since it's executed
my_obj.obj_var       // Clearly an attribute, since it's referenced

# PHP
$my_obj = new MyObj()
$my_obj->obj_method()
$my_obj->obj_var
MyObj::MyStaticMethod()

# Python
MyObj = MyClass # copies the reference of the class to a new name
my_obj = MyObj() # Clearly a class, being instantiated
my_obj.obj_method() # Clearly a method, since it's executed
my_obj.obj_var # clearly an attribute, since it's referenced
my_obj.obj_method # Also, an attribute, but holding the instance method.
my_method = my_obj.obj_method # Instance method
my_method() # Same as my_obj.obj_method()
MyClassMethod = MyObj.obj_method # Attribute holding the class method
MyClassMethod(my_obj) # Same as my_obj.obj_method()
MyClassMethod(MyObj) # Same as calling MyObj.obj_method() as a static classmethod

So there it is, my completely obsubjective opinion on camelCase.


  • Soon we are going to program in a world without databases.

  • AOP and dependency injection are the GOTO of the 21st century.

  • Building software is a social activity, not a technical one.

  • Joel has a blog.


Believe it or not, my belief that, in an OO language, most of the (business logic) code that operates on a class's data should be in the class itself is heresy on my team.


Goto is OK! (is that controversial enough?)
Sometimes... so give us the choice! For example, BASH doesn't have goto. Maybe there is some internal reason for this, but still.
Also, goto is the building block of assembly language. No if statements for you! :)


Size matters! Embellish your code so it looks bigger.


Software sucks due to a lack of diversity. No offense to any race, but things work pretty well when a profession is made up of different races and both genders. Just look at overusing non-renewable energy. It is going great because everyone is contributing, not just the "stereotypical guy".


Whenever you expose a mutable class to the outside world, you should provide events to make it possible to observe its mutation. The extra effort may also convince you to make it immutable after all.
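
A minimal sketch of what that could look like in Java (the listener wiring and names are my own, not a prescribed API):

import java.util.ArrayList;
import java.util.List;
import java.util.function.IntConsumer;

// A mutable counter that lets the outside world observe every mutation.
class ObservableCounter {
    private int value;
    private final List<IntConsumer> listeners = new ArrayList<>();

    void onChange(IntConsumer listener) { listeners.add(listener); }

    void increment() {
        value++;
        listeners.forEach(l -> l.accept(value));  // notify observers of the new value
    }
}

public class ObservableDemo {
    public static void main(String[] args) {
        ObservableCounter counter = new ObservableCounter();
        counter.onChange(v -> System.out.println("counter is now " + v));
        counter.increment();  // prints "counter is now 1"
    }
}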


I'm always right.

Or call it design by discussion. But if I propose something, you'd better be able to demonstrate why I'm wrong, and propose an alternative that you can defend.

Of course, this only works if I'm reasonable. Luckily for you, I am. :)


Correct every defect when it's discovered. Not just "severity 1" defects; all defects.

Establish a deployment mechanism that makes application updates immediately available to users, but allows them to choose when to accept these updates. Establish a direct communication mechanism with users that enables them to report defects, relate their experience with updates, and suggest improvements.

With aggressive testing, many defects can be discovered during the iteration in which they are created; immediately correcting them reduces developer interrupts, a significant contributor to defect creation. Immediately correcting defects reported by users forges a constructive community, replacing product quality with product improvement as the main topic of conversation. Implementing user-suggested improvements that are consistent with your vision and strategy produces a community of enthusiastic evangelists.


Never let best practices or pattern obsession enslave you.

These should be guidelines, not laws set in stone.

And I really do like patterns; the GoF book more or less says it that way too - they're something to browse through, providing a common jargon, not a ball-and-chain gospel.


This one is not exactly about programming, because HTML/CSS are not programming languages.

Tables are ok for layout

CSS and divs can't do everything; save yourself the hassle and use a simple table, then use CSS on top of it.


Controversial to myself, because some things are better left unsaid, so you won't be painted by others as too egotistical. However, here it is:

If it is to be, it begins with me


Opinion: There should not be any compiler warnings, only errors. Or, formulated differently: you should always compile your code with -Werror.

Reason: Either the compiler thinks it is something that should be corrected, in which case it should be an error, or it is not necessary to fix, in which case the compiler should just shut up.


Most developers don't have a clue

Yup... there you go. I've said it. I find that of all the developers I personally know, just a handful are actually good. Just a handful understand that code should be tested... that the object-oriented approach to developing is actually there to help you. It frustrates me to no end that there are people who get the title of developer while in fact all they can do is copy and paste a bit of source code and then execute it.

Anyway... I'm glad initiatives like Stack Overflow are being started. It's good for developers to wonder. Is there a better way? Am I doing it correctly? Perhaps I could use this technique to speed things up, etc.

But nope... the majority of developers just learn whatever language their job requires and stick with it until they themselves become old and grumpy developers who have no clue what's going on. All they'll get is a big paycheck, since they are simply older than you.

Ah well ... life is unjust in the IT community and I'll be taking steps to ignore such people in the future. Hooray!


Code Generation is bad

I hate languages that require you to make use of code generation (or copy&paste) for simple things, like JavaBeans with all their Getters and Setters.

C#'s AutoProperties are a step in the right direction, but for nice DTOs with Fields, Properties and Constructor parameters you still need a lot of redundancy.
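
For anyone who hasn't felt the pain, this is the kind of generated (or copy-and-pasted) JavaBean boilerplate being complained about - a made-up two-field DTO:

// Two fields of actual information, a screenful of ceremony.
public class CustomerDto {
    private String name;
    private int accountNumber;

    public CustomerDto(String name, int accountNumber) {
        this.name = name;
        this.accountNumber = accountNumber;
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getAccountNumber() { return accountNumber; }
    public void setAccountNumber(int accountNumber) { this.accountNumber = accountNumber; }
}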


Open Source software costs more in the long run

For regular Line of Business companies, Open Source looks free but has hidden costs.

When you take into account inconsistency of quality, variable usability and UI/UX, difficulties with interoperability and standards, increased configuration, and the associated increased need for training and support, the Total Cost of Ownership for Open Source is much higher than for commercial offerings.

Tech-savvy programmer-types take the liberation of Open Source and run with it; they 'get it' and can adopt it and customise it to suit their purposes. On the other hand, businesses that are primarily non-technical, but need software to run their offices, networks and websites, are running the risk of a world of pain for themselves and heavy costs in terms of lost time, productivity and (eventually) support fees and/or the cost of abandoning the experiment altogether.


I don't believe that any question related to optimization should be flooded with a chant of the misquoted "premature optimization is the root of all evil", because code that is optimized into obfuscation is what makes coding fun.


Explicit self in Python's method declarations is a poor design choice.

Method calls got syntactic sugar, but declarations didn't. It's a leaky abstraction (by design!) that causes annoying errors, including runtime errors with an apparent off-by-one error in the reported number of arguments.


Emacs is better


Social skills matter more than technical skills

Agreeable but average programmers with good social skills will have a more successful career than outstanding programmers who are disagreeable people.


C++ is a good language

I practically got lynched in another question a week or two back for saying that C++ wasn't a very nice language. So now I'll try saying the opposite. ;)

No, seriously, the point I tried to make then, and will try again now, is that C++ has plenty of flaws. It's hard to deny that. It's so extremely complicated that learning it well is practically something you can dedicate your entire life to. It makes many common tasks needlessly hard, allows the user to plunge head-first into a sea of undefined behavior and unportable code, with no warnings given by the compiler.

But it's not the useless, decrepit, obsolete, hated language that many people try to make it. It shouldn't be swept under the carpet and ignored. The world wouldn't be a better place without it. It has some unique strengths that, unfortunately, are hidden behind quirky syntax, legacy cruft and not least, bad C++ teachers. But they're there.

C++ has many features that I desperately miss when programming in C# or other "modern" languages. There's a lot in it that C# and other modern languages could learn from.

It's not blindly focused on OOP, but has instead explored and pioneered generic programming. It allows surprisingly expressive compile-time metaprogramming producing extremely efficient, robust and clean code. It took in lessons from functional programming almost a decade before C# got LINQ or lambda expressions.

It allows you to catch a surprising number of errors at compile-time through static assertions and other metaprogramming tricks, which eases debugging vastly, and even beats unit tests in some ways. (I'd much rather catch an error at compile-time than afterwards, when I'm running my tests).

Deterministic destruction of variables allows RAII, an extremely powerful little trick that makes try/finally blocks and C#'s using blocks redundant.

And while some people accuse it of being "design by committee", I'd say yes, it is, and that's actually not a bad thing in this case. Look at Java's class library. How many classes have been deprecated again? How many should not be used? How many duplicate each others' functionality? How many are badly designed?

C++'s standard library is much smaller, but on the whole, it's remarkably well designed, and except for one or two minor warts (vector<bool>, for example), its design still holds up very well. When a feature is added to C++ or its standard library, it is subjected to heavy scrutiny. Couldn't Java have benefited from the same? .NET too, although it's younger and was somewhat better designed to begin with, is still accumulating a good handful of classes that are out of sync with reality, or were badly designed to begin with.

C++ has plenty of strengths that no other language can match. It's a good language.


There are only 2 kinds of people who use C (/C++): Those who don't know any other language, and those who are too lazy to learn a new one.


Relational databases are awful for web applications.

For example:

  • threaded comments
  • tag clouds
  • user search
  • maintaining record view counts
  • providing undo / revision tracking
  • multi-step wizards

A random collection of Cook's aphorisms...

  • The hardest language to learn is your second.

  • The hardest OS to learn is your second one - especially if your first was an IBM mainframe.

  • Once you've learned several seemingly different languages, you finally realize that all programming languages are the same - just minor differences in syntax.

  • Although one can be quite productive and marketable without having learned any assembly, no one will ever have a visceral understanding of computing without it.

  • Debuggers are the final refuge for programmers who don't really know what they're doing in the first place.

  • No OS will ever be stable if it doesn't make use of hardware memory management.

  • Low level systems programming is much, much easier than applications programming.

  • The programmer who has a favorite language is just playing.

  • Write the User's Guide FIRST!

  • Policy and procedure are intended for those who lack the initiative to perform otherwise.

  • (The Contractor's Creed): Tell'em what they need. Give'em what they want. Make sure the check clears.

  • If you don't find programming fun, get out of it or accept that although you may make a living at it, you'll never be more than average.

  • Just as the old farts have to learn the .NET method names, you'll have to learn the library calls. But there's nothing new there.
    The life of a programmer is one of constantly adapting to different environments, and the more tools you have hung on your belt, the more versatile and marketable you'll be.

  • You may piddle around a bit with little code chunks near the beginning to try out some ideas, but, in general, one doesn't start coding in earnest until you KNOW how the whole program or app is going to be laid out, and you KNOW that the whole thing is going to work EXACTLY as advertised. For most projects with at least some degree of complexity, I generally end up spending 60 to 70 percent of the time up front just percolating ideas.

  • Understand that programming has little to do with language and everything to do with algorithm. All of those nifty geegaws with memorable acronyms that folks have come up with over the years are just different ways of skinning the implementation cat. When you strip away all the OOPiness, RADology, Development Methodology 37, and Best Practice 42, you still have to deal with the basic building blocks of:

    • assignments
    • conditionals
    • iterations
    • control flow
    • I/O

Once you can truly wrap yourself around that, you'll eventually get to the point where you see (from a programming standpoint) little difference between writing an inventory app for an auto parts company, a graphical real-time TCP performance analyzer, a mathematical model of a stellar core, or an appointments calendar.

  • Beginning programmers work with small chunks of code. As they gain experience, they work with ever increasingly large chunks of code.
    As they gain even more experience, they work with small chunks of code.

Null references should be removed from OO languages

Coming from a Java and C# background, where it's normal to return null from a method to indicate failure, I've come to the conclusion that nulls cause a lot of avoidable problems. Language designers can remove a whole class of errors related to NullReferenceExceptions if they simply eliminate null references from code.

Additionally, when I call a method, I have no way of knowing whether that method can return null references unless I actually dig in the implementation. I'd like to see more languages follow F#'s model for handling nulls: F# doesn't allow programmers to return null references (at least for classes compiled in F#), instead it requires programmers to represent empty objects using option types. The nice thing about this design is how useful information, such as whether a function can return null references, is propagated through the type system: functions which return a type 'a have a different return type than functions which return 'a option.
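
Java can't remove null from the language, but as a rough approximation of the F# idea (my sketch, with invented names), java.util.Optional at least moves "might be absent" into the return type:

import java.util.Map;
import java.util.Optional;

public class OptionalDemo {
    private static final Map<String, Integer> AGES = Map.of("ada", 36);

    // The signature says "may be absent" instead of silently returning null.
    static Optional<Integer> findAge(String name) {
        return Optional.ofNullable(AGES.get(name));
    }

    public static void main(String[] args) {
        // The caller is forced to handle the empty case explicitly.
        System.out.println(findAge("ada").map(a -> a + " years").orElse("unknown"));
        System.out.println(findAge("bob").map(a -> a + " years").orElse("unknown"));
    }
}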


Coding is an Art

Some people think coding is an art, and others think coding is a science.

The "science" faction argues that as the target is to obtain the optimal code for a situation, then coding is the science of studying how to obtain this optimal.

The "art" faction argues there are many ways to obtain the optimal code for a situation, the process is full of subjectivity, and that to choose wisely based on your own skills and experience is an art.


Bad Programmers are Language-Agnostic

A really bad programmer can write bad code in almost any language.


Design Patterns are a symptom of Stone Age programming language design

They have their purpose. A lot of good software gets built with them. But the fact that there was a need to codify these "recipes" for psychological abstractions about how your code works/should work speaks to a lack of programming languages expressive enough to handle this abstraction for us.

The remedy, I think, lies in languages that allow you to embed more and more of the design into the code, by defining language constructs that might not exist or might not have general applicability but really really make sense in situations your code deals with incessantly. The Scheme people have known this for years, and there are things possible with Scheme macros that would make most monkeys-for-hire piss their pants.


If a developer cannot write clear, concise and grammatically correct comments then they should have to go back and take English 101.

We have developers and (the horror) architects who cannot write coherently. When their documents are reviewed they say things like "oh, don't worry about grammatical errors or spelling - that's not important". Then they wonder why their convoluted garbage documents become convoluted buggy code.

I tell the interns that I mentor that if you can't communicate your great ideas verbally or in writing you may as well not have them.


Software engineers should not work with computer science guys

Their differences:

  • SEs care about code reusability, while CSs just suss out code
  • SEs care about performance, while CSs just want to have things done now
  • SEs care about whole structure, while CSs do not give a toss
  • ...


In my workplace, I've been trying to introduce more Agile/XP development habits. Continuous Design is the one I've felt most resistance on so far. Maybe I shouldn't have phrased it as "let's round up all of the architecture team and shoot them"... ;)


Only write an abstraction if it's going to save 3X as much time later.

I see people write all these crazy abstractions sometimes and I think to myself, "Why?"

Unless an abstraction is really going to save you time later or it's going to save the person maintaining your code time, it seems people are just writing spaghetti code more and more.


Code layout does matter

Maybe the specifics of brace position should remain purely religious arguments - but that doesn't mean that all layout styles are equal, or that there are no objective factors at all!

The trouble is that the uber-rule for layout, namely "be consistent", sound as it is, is used as a crutch by many to never try to see whether their default style can be improved on - and, furthermore, to assume that it doesn't even matter.

A few years ago I was studying Speed Reading techniques, and some of the things I learned about how the eye takes in information in "fixations", can most optimally scan pages, and the role of subconsciously picking up context, got me thinking about how this applied to code - and writing code with it in mind especially.

It led me to a style that tended to be columnar in nature, with identifiers logically grouped and aligned where possible (in particular I became strict about having each method argument on its own line). However, rather than long columns of unchanging structure, it's actually beneficial to vary the structure in blocks so that you end up with rectangular islands that the eye can take in in a single fixation - even if you don't consciously read every character.

The net result is that, once you get used to it (which typically takes 1-3 days) it becomes pleasing to the eye, easier and faster to comprehend, and is less taxing on the eyes and brain because it's laid out in a way that makes it easier to take in.

Almost without exception, everyone I have asked to try this style (including myself) initially said, "ugh I hate it!", but after a day or two said, "I love it - I'm finding it hard not to go back and rewrite all my old stuff this way!".

I've been hoping to find the time to do more controlled experiments to collect together enough evidence to write a paper on, but as ever have been too busy with other things. However this seemed like a good opportunity to mention it to people interested in controversial techniques :-)

[Edit]

I finally got around to blogging about this (after many years parked in the "meaning to" phase): Part one, Part two, Part three.


Variable_Names_With_Bloody_Underscores

or even worse

CAPITALIZED_VARIABLE_NAMES_WITH_BLOODY_UNDERSCORES

should be globally expunged... with prejudice! CamelCapsAreJustFine. (Global constants notwithstanding)

GOTO statements are for use by developers under the age of 11

Any language that does not support pointers is not worthy of the name

.Net = .Bloat. The finest example of Microsoft's efforts for web site development (Expressionless Web 2) is also the finest example of slow, bloated cr@pw@re ever written. (Try Web Studio instead.)

Response: OK well let me address the Underscore issue a little. From the C link you provided:

-Global constants should be all caps with '_' separators. This I actually agree with because it is so BLOODY_OBVIOUS

-Take for example NetworkABCKey. Notice how the C from ABC and K from key are confused. Some people don't mind this and others just hate it so you'll find different policies in different code so you never know what to call something.

I fall into the former category. I choose names VERY carefully, and if you cannot figure out in one glance that the K belongs to Key then English is probably not your first language.

  • C Function Names

    • In a C++ project there should be very few C functions.
    • For C functions use the GNU convention of all lower case letters with '_' as the word delimiter.

Justification

* It makes C functions very different from any C++ related names. 

Example

int some_bloody_function() { }

These "standards" and conventions are simply the arbitrary decisions handed down through time. I think that while they make a certain amount of logical sense, They clutter up code and make something that should be short and sweet to read, clumsy, long winded and cluttered.

C has been adopted as the de-facto standard, not because it is friendly, but because it is pervasive. I can express 100 lines of C code in 20 with a syntactically friendly high-level language.

This makes the program flow easy to read, and as we all know, revisiting code after a year or more means following the breadcrumb trail all over the place.

I do use underscores, but for global variables only, as they are few and far between and they stick out clearly. Other than that, a well thought out CamelCaps() function/variable name has yet to let me down!


Design patterns are bad.

Actually, design patterns aren't.

You can write bad code, and bury it under a pile of patterns. Use singletons as global variables, and states as goto's. Whatever.

A design pattern is a standard solution for a particular problem, but requires you to understand the problem first. If you don't, design patterns become a part of the problem for the next developer.


Intranet Frameworks like SharePoint make me think the whole corporate world is one giant ostrich with its head in the sand

I'm not only talking about MOSS here; I've worked with some other CORPORATE INTRANET products, and absolutely not one of them is great, but SharePoint (MOSS) is by far the worst.

  • Most of these systems don't easily bridge the gap between Intranet and Internet. So as a remote worker you're forced to VPN in. External customers just don't have the luxury of getting hold of your internal information first hand. Sure this can be fixed at a price $$$.
  • The search capabilities are always pathetic. Lots of the time other departments simply don't know what information is out there.
  • Information fragments, people start boycotting workflows or revert to email
  • SharePoint development is the most painful form of development on the planet. Nothing sucks like SharePoint. I've seen a few developers contemplating quitting IT after working for over a year with MOSS.
  • No matter how the developers hate MOSS, no matter how long the most basic of projects take to roll out, no matter how novice the results look, and no matter how unsearchable and fragmented the content is:

EVERYONE STILL CONTINUES TO USE AND PURCHASE SHAREPOINT, AND MANAGERS STILL TRY VERY HARD TO PRETEND IT'S NOT SATAN'S SPAWN.

Microformats

Using CSS classes originally designed for visual layout - now being assigned for both visual and contextual data - is a hack, with loads of ambiguity. I'm not saying the functionality should not exist, but fix the damn base language. HTML wasn't hacked to produce XML - instead the XML language emerged. Now we have these eager script kiddies hacking HTML and CSS to do something they weren't designed to do. That's still fine, but I wish they would keep these things to themselves and not make a standard out of it. Just to sum up - butchery!


Pagination is never what the user wants

If you start having the discussion about where to do pagination, in the database, in the business logic, on the client, etc. then you are asking the wrong question. If your app is giving back more data than the user needs, figure out a way for the user to narrow down what they need based on real criteria, not arbitrary sized chunks. And if the user really does want all those results, then give them all the results. Who are you helping by giving back 20 at a time? The server? Is that more important than your user?

[EDIT: clarification, based on comments]

As a real world example, let's look at this Stack Overflow question. Let's say I have a controversial programming opinion. Before I post, I'd like to see if there is already an answer that addresses the same opinion, so I can upvote it. The only option I have is to click through every page of answers.

I would prefer one of these options:

  1. Allow me to search through the answers (a way for me to narrow down what I need based on real criteria).

  2. Allow me to see all the answers so I can use my browser's "find" option (give me all the results).

The same applies if I just want to find an answer I previously read, but can't find anymore. I don't know when it was posted or how many votes it has, so the sorting options don't help. And even if I did, I still have to play a guessing game to find the right page of results. The fact that the answers are paginated and I can directly click into one of a dozen pages is no help at all.

--
bmb


Write your spec when you are finished coding. (if at all)

In many projects I have been involved in, a great deal of effort was spent at the outset writing a "spec" in Microsoft Word. This process culminated in a "sign off" meeting when the big shots bought in on the project, and after that meeting nobody ever looked at this document again. These documents are a complete waste of time and don't reflect how software is actually designed. This is not to say there are not other valuable artifacts of application design. They are usually contained on index cards, snapshots of whiteboards, cocktail napkins and other similar media that provide a kind of timeline for the app design. These are usually the real specs of the app. If you are going to write a Word document (and I am not particularly saying you should), do it at the end of the project. At least it will accurately represent what has been done in the code and might help someone down the road, like the QA team or the next version's developers.


Rob Pike wrote: "Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming."

And since these days any serious data is in the millions of records, I contend that good data modeling is the most important programming skill (whether using an RDBMS or something like SQLite, Amazon SimpleDB or Google App Engine data storage).

Fancy search and sorting algorithms aren't needed any more when the data, all the data, is stored in such a data storage system.


Code as Design: Three Essays by Jack W. Reeves

The source code of any software is its most accurate design document. Everything else (specs, docs, and sometimes comments) is either incorrect, outdated or misleading.

Guaranteed to get you fired pretty much everywhere.


Sometimes it's appropriate to swallow an exception.

For UI bells and whistles, prompting the user with an error message is interruptive, and there is usually nothing for them to do anyway. In this case, I just log it and deal with it when it shows up in the logs.
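
Roughly this kind of thing - a sketch with invented names, not code from a real project:

using System;

// "Swallow but log" for purely decorative UI work.
public static void ShowBannerSafely(Action fadeInBanner, Action<string, Exception> logWarning)
{
    try
    {
        fadeInBanner();   // eye candy only - nothing the user could act on if it fails
    }
    catch (Exception ex)
    {
        // No interruptive message box: record it and deal with it via the logs.
        logWarning("Banner animation failed", ex);
    }
}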


The code is the design


Don't write code, remove code!

As a smart teacher once told me: "Don't write code. Writing code is bad, removing code is good. And if you have to write code - write small code..."


A degree in computer science does not---and is not supposed to---teach you to be a programmer.

Programming is a trade, computer science is a field of study. You can be a great programmer and a poor computer scientist, or a great computer scientist and an awful programmer. It is important to understand the difference.

If you want to be a programmer, learn Java. If you want to be a computer scientist, learn at least three almost completely different languages. e.g. (assembler, c, lisp, ruby, smalltalk)


coding is not typing

It takes time to write the code. Most of the time in the editor window, you are just looking at the code, not actually typing. Not as often, but quite frequently, you are deleting what you have written. Or moving from one place to another. Or renaming.

If you are banging away at the keyboard for a long time you are doing something wrong.

Corollary: Number of lines of code written per day is not a linear measurement of a programmer's productivity; a programmer that writes 100 lines in a day is quite likely a better programmer than one that writes 20, but one that writes 5000 is almost certainly a bad programmer.


Having a process that involves code being approved before it is merged onto the main line is a terrible idea. It breeds insecurity and laziness in developers: people who, if they knew they could be screwing up dozens of others, would be very careful about the changes they make, instead get lulled into a sense of not having to think about all the possible clients of the code they may be affecting. The person going over the code is less likely to have thought about it as much as the person writing it, so it can actually lead to poorer quality code being checked in... though, yes, it will probably follow all the style guidelines and be well commented :)


Relational Databases are a waste of time. Use object databases instead!

Relational database vendors try to fool us into believing that the only scalable, persistent and safe storage in the world is relational databases. I am a certified DBA. Have you ever spent hours trying to optimize a query and had no idea what was going wrong? Relational databases don't let you make your own search paths when you need them. You give away much of the control over the speed of your app into the hands of people you've never met, and they are not as smart as you think.

Sure, sometimes in a well-maintained database they come up with a quick answer for a complex query. But the price you pay for this is too high! You have to choose between writing raw SQL every time you want to read an entry of your data, which is dangerous, or using an object-relational mapper, which adds more complexity and things outside your control.

More importantly, you are actively forbidden from coming up with smart search algorithms, because every damn roundtrip to the database costs you around 11ms. That is too much. Imagine you know this super graph algorithm which will answer a specific question - one which might not even be expressible in SQL! - in due time. But even if your algorithm is linear (and interesting algorithms are not linear), forget about combining it with a relational database, as enumerating a large table will take you hours!
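
A back-of-the-envelope sketch of that point (Node is an invented in-memory class): walking a million loaded objects costs nanoseconds per hop, whereas doing each hop as its own ~11ms roundtrip would take roughly three hours.

using System.Collections.Generic;

public class Node
{
    public List<Node> Neighbours = new List<Node>();
}

public static class GraphWalk
{
    public static int CountReachable(Node start)
    {
        HashSet<Node> seen = new HashSet<Node>();
        Stack<Node> pending = new Stack<Node>();
        pending.Push(start);

        while (pending.Count > 0)
        {
            Node current = pending.Pop();
            if (!seen.Add(current)) continue;        // already visited
            foreach (Node next in current.Neighbours)
                pending.Push(next);                  // in-memory hop: no roundtrip
        }
        return seen.Count;
    }
}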

Compare that with SandstoneDb, or Gemstone for Smalltalk! If you are into Java, give db4o a shot.

So, my advice is: Use an object-DB. Sure, they aren't perfect and some queries will be slower. But you will be surprised how many will be faster. Because loading the objects will not require all these strange transformations between SQL and your domain data. And if you really need speed for a certain query, object databases have the query optimizer you should trust: your brain.


Inversion of control does not eliminate dependencies, but it sure does a great job of hiding them.


Two brains think better than one

I firmly believe that pair programming is the number one factor when it comes to increasing code quality and programming productivity. Unfortunately it is also highly controversial with management, who believe that "more hands => more code => $$$!"


Programmers should never touch Word (or PowerPoint)

Unless you are developing a word or document processing tool, you should not touch a word processor that emits only binary blobs, and for that matter:

Generated XML files are binary blobs

Programmers should write plain text documents. The documents a programmer writes need to convey intention only, not formatting. They must be producible with the programming tool-chain: editor, version control, search utilities, build system and the like. When you already have and know how to use that tool-chain, every other document production tool is a horrible waste of time and effort.

When there is a need to produce a document for non-programmers, a lightweight markup language should be used such as reStructuredText (if you are writing a plain text file, you are probably writing your own lightweight markup anyway), and generate HTML, PDF, S5, etc. from it.


Regurgitating well known sayings by programming greats out of context with the zeal of a fanatic and the misplaced assumption that they are ironclad rules really gets my goat. For example 'premature optimization is the root of all evil' as covered by this thread.

IMO, many technical problems and solutions are very context sensitive and the notion of global best practices is a fallacy.


I have two:

Design patterns are sometimes a way for bad programmers to write bad code - a "when you have a hammer, all the world looks like a nail" mentality. If there is something I hate to hear, it is two developers creating a design by patterns: "We should use command with facade ...".

There is no such thing as "premature optimization". You should profile and optimize your code before you get to the point where it becomes too painful to do so.


As most others here, I try to adhere to principles like DRY and not being a human compiler.

Another strategy I want to push is "tell, don't ask". Instead of cluttering all objects with getters/setters essentially making a sieve of them, I'd like to tell them to do stuff.

This seems to go straight against good enterprise practices, with dumb entity objects and a thicker service layer (that does plenty of asking). Hmmm, thoughts?
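
Here's a small invented sketch of the difference I'm getting at:

using System;

// "Tell": the rule lives inside the object.
public class Account
{
    private decimal balance;

    public Account(decimal openingBalance) { balance = openingBalance; }

    public void Withdraw(decimal amount)
    {
        if (amount > balance)
            throw new InvalidOperationException("Insufficient funds");
        balance -= amount;
    }
}

// The "ask" version exposes GetBalance()/SetBalance() and leaves every caller
// to repeat the insufficient-funds check - the sieve of getters/setters I mean.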


1. You should not follow web standards - all the time.

2. You don't need to comment your code.

As long as it's understandable by a stranger.


It's okay to be Mort

Not everyone is a "rockstar" programmer; some of us do it because it's a good living, and we don't care about all the latest fads and trends; we just want to do our jobs.


A random collection of Cook's aphorisms...

  • The hardest language to learn is your second.

  • The hardest OS to learn is your second one - especially if your first was an IBM mainframe.

  • Once you've learned several seemingly different languages, you finally realize that all programming languages are the same - just minor differences in syntax.

  • Although one can be quite productive and marketable without having learned any assembly, no one will ever have a visceral understanding of computing without it.

  • Debuggers are the final refuge for programmers who don't really know what they're doing in the first place.

  • No OS will ever be stable if it doesn't make use of hardware memory management.

  • Low level systems programming is much, much easier than applications programming.

  • The programmer who has a favorite language is just playing.

  • Write the User's Guide FIRST!

  • Policy and procedure are intended for those who lack the initiative to perform otherwise.

  • (The Contractor's Creed): Tell'em what they need. Give'em what they want. Make sure the check clears.

  • If you don't find programming fun, get out of it or accept that although you may make a living at it, you'll never be more than average.

  • Just as the old farts have to learn the .NET method names, you'll have to learn the library calls. But there's nothing new there.
    The life of a programmer is one of constantly adapting to different environments, and the more tools you have hung on your belt, the more versatile and marketable you'll be.

  • You may piddle around a bit with little code chunks near the beginning to try out some ideas, but, in general, one doesn't start coding in earnest until you KNOW how the whole program or app is going to be laid out, and you KNOW that the whole thing is going to work EXACTLY as advertised. For most projects with at least some degree of complexity, I generally end up spending 60 to 70 percent of the time up front just percolating ideas.

  • Understand that programming has little to do with language and everything to do with algorithm. All of those nifty geegaws with memorable acronyms that folks have come up with over the years are just different ways of skinning the implementation cat. When you strip away all the OOPiness, RADology, Development Methodology 37, and Best Practice 42, you still have to deal with the basic building blocks of:

    • assignments
    • conditionals
    • iterations
    • control flow
    • I/O

Once you can truly wrap yourself around that, you'll eventually get to the point where you see (from a programming standpoint) little difference between writing an inventory app for an auto parts company, a graphical real-time TCP performance analyzer, a mathematical model of a stellar core, or an appointments calendar.

  • Beginning programmers work with small chunks of code. As they gain experience, they work with ever increasingly large chunks of code.
    As they gain even more experience, they work with small chunks of code.

Reuse of code is inversely proportional to its "reusability". Simply because "reusable" code is more complex, whereas quick hacks are easy to understand, so they get reused.

Software failures should take down the system, so that it can be examined and fixed. Software attempting to handle failure conditions is often worse than crashing. ie, is it better to have a system reset after crashing, or should it be indefinitely hung because the failure handler has a bug?


Unit Testing won't help you write good code

The only reason to have Unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.

And furthermore, good developers will keep coupling low, which will make the addition of new code unlikely to cause problems with existing stuff.

In fact, I'll generalize that even further,

Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.

They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.


The only "best practice" you should be using all the time is "Use Your Brain".

Too many people jumping on too many bandwagons and trying to force methods, patterns, frameworks etc onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)

EDIT: Just to clarify - I don't think people should ignore best practices, valued opinions etc. Just that people shouldn't just blindly jump on something without thinking about WHY this "thing" is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?


Women make better programmers than men.

The female programmers I've worked with don't get wedded to "their" code as much as men do. They're much more open to criticism and new ideas.


Readability is the most important aspect of your code.

Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.


The worst thing about recursion is recursion.


Controversial to myself, because some things are better left unsaid, so you won't be painted by others as too egotistical. However, here it is:

If it is to be, it begins with me


It's a good idea to keep optimisation in mind when developing code.

Whenever I say this, people always reply: "premature optimisation is the root of all evil".

But I'm not saying optimise before you debug. I'm not even saying optimise ever, but when you're designing code, bear in mind the possibility that this might become a bottleneck, and write it so that it will be possible to refactor it for speed, without tearing the API apart.
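
As a sketch of what I mean (names invented): let callers depend on a small interface, so the naive implementation can later be swapped for a faster one without touching the API.

using System.Collections.Generic;

public interface ISpellChecker
{
    bool IsKnownWord(string word);
}

// Good enough to ship with: a simple linear scan.
public class ListSpellChecker : ISpellChecker
{
    private readonly List<string> words;
    public ListSpellChecker(List<string> words) { this.words = words; }
    public bool IsKnownWord(string word) { return words.Contains(word); }
}

// If profiling later shows this is the bottleneck, drop in a hash-based
// implementation - no caller has to change.
public class HashSpellChecker : ISpellChecker
{
    private readonly HashSet<string> words;
    public HashSpellChecker(IEnumerable<string> words) { this.words = new HashSet<string>(words); }
    public bool IsKnownWord(string word) { return words.Contains(word); }
}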

Hugo


The code is the design


I believe the use of try/catch exception handling is worse than the use of simple return codes and associated common messaging structures to ferry useful error messages.

Littering code with try/catch blocks is not a solution.

Just passing exceptions up the stack, hoping what's above you will do the right thing or generate an informative error, is not a solution.

Thinking you have any chance of systematically verifying that the proper exception handlers are available to address anything that could go wrong in either transparent or opaque objects is not realistic. (Think also in terms of late bindings/external libraries and unnecessary dependencies between unrelated functions in a call stack as the system evolves.)

Use of return codes is simple, can be easily and systematically verified for coverage, and if handled properly forces developers to generate useful error messages rather than the all-too-common stack dumps and obscure I/O exceptions that are "exceptionally" meaningless to even the most clueful of end users.
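
A minimal sketch of the "return code plus common messaging structure" style I mean (all names invented):

public enum ResultCode { Ok, NotFound, PermissionDenied, IoFailure }

public struct OpResult
{
    public ResultCode Code;
    public string Message;   // something a human could actually act on

    public static OpResult Ok() { return new OpResult { Code = ResultCode.Ok }; }
    public static OpResult Fail(ResultCode code, string message)
    {
        return new OpResult { Code = code, Message = message };
    }
}

// Callers can be mechanically audited for handling the result:
//   OpResult result = SaveReport(path);
//   if (result.Code != ResultCode.Ok)
//       ShowError(result.Message);   // informative, not a stack dump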

--

My final objection is the use of garbage collected languages. Don't get me wrong.. I love them in some circumstances but in general for server/MC systems they have no place in my view.

GC is not infallible - even extremely well designed GC algorithms can hang on to objects too long or even forever, based on non-obvious circular references in their dependency graphs.

Non-GC systems following a few simple patterns and using memory accounting tools don't have this problem, but do require more work in design and test up front than GC environments. The tradeoff here is that memory leaks are extremely easy to spot during testing in non-GC, while finding GC related problem conditions is a much more difficult proposition.

Memory is cheap, but what happens when you leak expensive objects such as transaction handles, synchronization objects, socket connections... etc.? In my environment the very thought that you can just sit back and let the language worry about this for you is unthinkable without significant fundamental changes in the software's design.
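
As an invented C# example of the distinction: the GC will eventually reclaim the memory, but only deterministic cleanup releases the expensive resource promptly.

using System.Data.SqlClient;

public static class OrderStats
{
    public static int CountOrders(string connectionString)
    {
        // using/Dispose releases the connection at the end of the block,
        // instead of whenever a collection happens to run.
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}

// Forget the using blocks and nothing fails immediately - the connection pool
// just quietly runs out under load, long before memory is ever a problem.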


A picture is not worth a thousand words.

Some pictures might be worth a thousand words. Most of them are not. This trite old aphorism is mostly untrue and is a pathetic excuse for many a lazy manager who did not want to read carefully created reports and documentation to say "I need you to show me in a diagram."

My wife studied for a linguistics major and saw several fascinating proofs against the conventional wisdom on pictures and logos: they do not break across language and cultural barriers, they usually do not communicate anywhere near as much information as correct text, they simply are no substitute for real communication.

In particular, labeled bubbles connected with lines are useless if the lines are unlabeled and unexplained, and/or if every line has a different meaning instead of signifying the same relationship (unless distinguished from each other in some way). If your lines sometimes signify relationships and sometimes indicate actions and sometimes indicate the passage of time, you're really hosed.

Every good programmer knows you use the tool suited for the job at hand, right? Not all systems are best specified and documented in pictures. Graphical specification languages that can be automatically turned into provably-correct, executable code or whatever are a spectacular idea, if such things exist. Use them when appropriate, not for everything under the sun. Entity-Relationship diagrams are great. But not everything can be summed up in a picture.

Note: a table may be worth its weight in gold. But a table is not the same thing as a picture. And again, a well-crafted short prose paragraph may be far more suitable for the job at hand.


It takes less time to produce well-documented code than poorly-documented code

When I say well-documented I mean with comments that communicate your intention clearly at every step. Yes, typing comments takes some time. And yes, your coworkers should all be smart enough to figure out what you intended just by reading your descriptive function and variable names and spelunking their way through all your executable statements. But it takes more of their time to do it than if you had just explained your intentions, and clear documentation is especially helpful when the logic of the code turns out to be wrong. Not that your code would ever be wrong...
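
A tiny invented illustration of the kind of intention-communicating comment I mean:

public static decimal AmountDueInLocalCurrency(decimal orderTotal, decimal yesterdayRate)
{
    // We bill in the customer's currency, but the rate feed only refreshes
    // overnight - so we deliberately use yesterday's rate and let finance
    // reconcile the difference at month end.
    return orderTotal * yesterdayRate;
}

// A comment saying "multiply total by rate" would add nothing; the point is to
// capture the *why* that descriptive names alone cannot express.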

I firmly believe that if you time it from when you start a project to when you ship a defect-free product, writing well-documented code takes less time. For one thing, having to explain clearly what you're doing forces you to think it through clearly, and if you can't write a clear, concise explanation of what your code is accomplishing then it's probably not designed well. And for another purely selfish reason, well-documented and well-structured code is far easier to dump onto someone else to maintain - thus freeing the original author to go create the next big thing. I rarely if ever have to stop what I'm doing to explain how my code was meant to work because it's blatantly obvious to anyone who can read English (even if they can't read C/C++/C# etc.). And one more reason is, frankly, my memory just isn't that good! I can't recall what I had for breakfast yesterday, much less what I was thinking when I wrote code a month or a year ago. Perhaps your memory is far better than mine, but because I document my intentions I can quickly pick up wherever I left off and make changes without having to first figure out what I was thinking when I wrote it.

That's why I document well - not because I feel some noble calling to produce pretty code fit for display, and not because I'm a purist, but simply because end-to-end it lets me ship quality software in less time.


JavaScript is a "messy" language but god help me I love it.


Design patterns are a waste of time when it comes to software design and development.

Don't get me wrong, design patterns are useful but mainly as a communication vector. They can express complex ideas very concisely: factory, singleton, iterator...

But they shouldn't serve as a development method. Too often developers architect their code using a flurry of design pattern-based classes where a more concise design would be better, both in terms of readability and performance. All that with the illusion that individual classes could be reused outside their domain. If a class is not designed for reuse or isn't part of the interface, then it's an implementation detail.

Design patterns should be used to put names on organizational features, not to dictate the way code must be written.

(It was supposed to be controversial, remember?)


Tcl/Tk is the best GUI language/toolkit combo ever

It may lack specific widgets and be less good-looking than the new kids on the block, but its model is elegant and so easy to use that one can build working GUIs faster by typing commands interactively than by using a visual interface builder. Its expressive power is unbeatable: other solutions (Gtk, Java, .NET, MFC...) typically require ten to one hundred LOC to get the same result as a Tcl/Tk one-liner. All without even sacrificing readability or stability.

pack [label .l -text "Hello world!"] [button .b -text "Quit" -command exit]


Uncommented code is the bane of humanity.

I think that comments are necessary for code. They visually divide it up into logical parts, and provide an alternative representation when reading code.

Documentation comments are the bare minimum, but using comments to split up longer functions helps when writing new code and allows quicker analysis when returning to existing code.


Opinion: That frameworks and third party components should only be used as a last resort.

I often see programmers immediately pick a framework to accomplish a task without learning the underlying approach it takes to work. Something will inevitably break, or we'll find a limitation we didn't account for, and we'll be immediately stuck and have to rethink a major part of the system. Frameworks are fine to use as long as the choice is carefully thought out.


Every developer should spend several weeks, or even months, developing paper-based systems before they start building electronic ones. They should also then be forced to use their systems.

Developing a good paper-based system is hard work. It forces you to take into account human nature (cumbersome processes get ignored, ones that are too complex tend to break down), and teaches you to appreciate the value of simplicity (new work goes in this tray, work for QA goes in this tray, archiving goes in this box).

Once you've worked out how to build a system on paper, it's often a lot easier to build an effective computer system - one that people will actually want to (and be able to) use.

The systems we develop are not manned by an army of perfectly-trained automata; real people use them, real people who are trained by managers who are also real people and have far too little time to waste training them how to jump through your hoops.

In fact, for my second point:

Every developer should be required to run an interactive training course to show users how to use their software.


Writing extensive specifications is futile.
It's pretty difficult to write correct programs, but compilers, debuggers, unit tests, testers etc. make it possible to detect and eliminate most errors. On the other hand, when you write specs with a level of detail comparable to a program (i.e. pseudocode, UML), you are mostly on your own. Consider yourself lucky if you have a tool that helps you get the syntax right.

Extensive specifications are most likely bug riddled.
The chance that the writer got it right on the first try is about the same as the chance that a similarly large program is bug-free without ever being tested. Peer reviews eliminate some bugs, just like code reviews do.


To Be A Good Programmer really requires working in multiple aspects of the field: Application development, Systems (Kernel) work, User Interface Design, Database, and so on. There are certain approaches common to all, and certain approaches that are specific to one aspect of the job. You need to learn how to program Java like a Java coder, not like a C++ coder and vice versa. User Interface design is really hard, and uses a different part of your brain than coding, but implementing that UI in code is yet another skill as well. It is not just that there is no "one" approach to coding, but there is not just one type of coding.


That software can be bug free if you have the right tools and take the time to write it properly.


Getting paid to program is generally one of the worst uses of a man's time.

For one thing, you're in competition with the Elbonians, who work for a quarter a day. You need to convince your employer that you offer something the Elbonians never can, and that your something is worth a livable salary. As the Elbonians get more and more overseas business, the real advantage wears thin, and management knows it.

For another thing, you're spending time solving someone else's problems. That's time you could spend advancing your own interests, or working on problems that actually interest you. And if you think you're saving the world by working on the problems of other men, then why don't you just get the Elbonians to do it for you?

Last, the great innovations in software (visicalc, Napster, Pascal, etc) were not created by cubicle farms. They were created by one or two people without advance pay. You can't forcibly recreate that. It's just magic that sometimes happens when a competent programmer has a really good idea.

There is enough software. There are enough software developers. You don't have to be one for hire. Save your talents, your time, your hair, your marriage. Let someone else sell his soul to the keyboard. If you want to program, fine. But don't do it for the money.


I generally hold pretty controversial, strong and loud opinions, so here's just a couple of them:

"Because we're a Microsoft outfit/partner/specialist" is never a valid argument.

The company I'm working in now identifies itself, first and foremost, as a Microsoft specialist. So the aforementioned argument gets thrown around quite a bit, and I've yet to see a context where it's valid.

I can't see why it's a reason to promote Microsoft's technology and products in every applicable corner, overriding customer and employee satisfaction, and general pragmatics.

This is just a cornerstone of my deep hatred towards politics in the software business.

MOSS (Microsoft Office Sharepoint Server) is a piece of shit.

Kinda echoes the first opinion, but I honestly think MOSS, as it is, should be shot out of the market. It costs gazillions to license and set up, pukes on web standards and makes developers generally pretty unhappy. I have yet to see a MOSS project that has an overall positive outcome.

Yet time after time, a customer approaches us and asks for a MOSS solution.


"Java Sucks" - yeah, I know that opinion is definitely not held by all :)

I have that opinion because the majority of Java applications I've seen are memory hogs, run slowly, have horrible user interfaces and so on.

G-Man


Estimates are for me, not for you

Estimates are a useful tool for me, as development line manager, to plan what my team is working on.

They are not a promise of a feature's delivery on a specific date, and they are not a stick for driving the team to work harder.

IMHO if you force developers to commit to estimates you get the safest possible figure.

For instance -

I think a feature will probably take me around 5 days. There's a small chance of an issue that would make it take 30 days.

If the estimates are just for planning then we'll all work to 5 days, and account for the small chance of an issue should it arise.

However - if meeting that estimate is required as a promise of delivery what estimate do you think gets given?

If a developer's bonus or job security depends on meeting an estimate do you think they give their most accurate guess or the one they're most certain they will meet?

This opinion of mine is controversial with other management, and has been interpreted as me trying to worm my way out of having proper targets, or me trying to cover up poor performance. It's a tough sell every time, but one that I've gotten used to making.


Non-development staff should not be allowed to manage development staff.

Correction: Staff with zero development experience should not be allowed to manage development staff.


Primitive data types are premature optimization.

There are languages that get by with just one data type, the scalar, and they do just fine. Other languages are not so fortunate. Developers just throw "int" and "double" in because they have to write in something.

What's important is not how big the data types are, but what the data is used for. If you have a day of the month variable, it doesn't matter much if it's signed or unsigned, or whether it's char, short, int, long, long long, float, double, or long double. It does matter that it's a day of the month, and not a month, or day of week, or whatever. See Joel's column on making things that are wrong look wrong; Hungarian notation as originally proposed was a Good Idea. As used in practice, it's mostly useless, because it says the wrong thing.
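
As an invented sketch of the point: wrap the meaning, not the bit width.

using System;

public struct DayOfMonth
{
    private readonly int value;

    public DayOfMonth(int value)
    {
        if (value < 1 || value > 31)
            throw new ArgumentOutOfRangeException("value");
        this.value = value;
    }

    public int Value { get { return value; } }
}

// A method taking a DayOfMonth can no longer be handed a month number or a day
// of the week by accident - something 'int' (or 'short', or 'long') never gave you.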


Social skills matter more than technical skills

Agreeable but average programmers with good social skills will have a more successful career than outstanding programmers who are disagreeable people.


MS Access* is a Real Development Tool and it can be used without shame by professional programmers

Just because a particular platform is a magnet for hacks and secretaries who think they are programmers shouldn't besmirch the platform itself. Every platform has its benefits and drawbacks.

Programmers who bemoan certain platforms or tools or belittle them as "toys" are more likely to be far less knowledgeable about their craft than their ego has convinced them they are. It is a definite sign of overconfidence for me to hear a programmer bash any environment that they have not personally used extensively enough to know well.

* Insert just about any maligned tool (VB, PHP, etc.) here.


When Creating Unit tests for a Data Access Layer, data should be retrieved directly from the DB, not from mock objects.

Consider the following:

IList<Customer> GetCustomers()
{
  List<Customer> res = new List<Customer>();

  DbCommand cmd = // initialize command
  IDataReader r = cmd.ExecuteReader();

  // Map each row onto a Customer object.
  while (r.Read())
  {
     Customer c = ReadFieldsIntoCustomer(r);
     res.Add(c);
  }

  return res;
}

In a unit test for GetCustomers, should the call to cmd.ExecuteReader() actually access the DB, or should its behavior be mocked?

I reckon that you shouldn't mock the actual call to the DB if the following holds true:

  1. A test server and the schema exist.
  2. The schema is stable (meaning you are not expecting major changes to it)
  3. The DAL has no smart logic: queries are constructed trivially (config/stored procs) and the deserialization logic is simple.

From my experience the great benefit of this approach is that you get to interact with the DB early, experiencing the 'feel', not just the 'look'. It saves you lots of headaches afterwards and is the best way to familiarize oneself with the schema.

Many might argue that as soon as the execution flow crosses process boundaries, it ceases to be a unit test. I agree it has its drawbacks, especially that when the DB is unavailable you cannot run the unit tests.

However, I believe that this should be a valid thing to do in many cases.


1) The Business Apps farce:

I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.

Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and write quickly. You have massive APIs, half of which exist just to integrate the work of the other APIs, interfaces that are impossible to recycle, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.

How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?

The point of this is not that complexity==bad, it's that unnecessary complexity==bad. I've worked in massive enterprise installations where some of it was necessary, but even in most cases a few home-grown scripts and a simple web frontend is all that's needed to solve most use cases.

I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.

2) The n-years-of-experience-required:

Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.

3) The common "computer science" degree curriculum:

The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.


Programmers who don't code in their spare time for fun will never become as good as those that do.

I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.

(Note: I'm not saying good programmers do nothing else than programming, but they do more than program from 9 to 5)


Sometimes jumping on the bandwagon is ok

I get tired of people exhibiting "grandpa syndrome" ("You kids and your newfangled Test Driven Development. Every big technology that's come out in the last decade has sucked. Back in my day, we wrote real code!"... you get the idea).

Sometimes things that are popular are popular for a reason.


Using Stored Procedures

Unless you are writing a large procedural function composed of non-reusable SQL queries, please move your stored procedures out of the database and into version control.


Controversial eh? I reckon the fact that C++ streams use << and >>. I hate it. They are shift operators. Overloading them in this way is plain bad practice. It makes me want to kill whoever came up with that and thought it was a good idea. GRRR.


C++ is one of the WORST programming languages - EVER.

It has all of the hallmarks of something designed by committee - it does not do any given job well, and does some jobs (like OO) terribly. It has a "kitchen sink" desperation to it that just won't go away.

It is a horrible "first language" to learn to program with. You get no elegance, no assistance (from the language). Instead you have bear traps and mine fields (memory management, templates, etc.).

It is not a good language to try to learn OO concepts. It behaves as "C with a class wrapper" instead of a proper OO language.

I could go on, but will leave it at that for now. I have never liked programming in C++, and although I "cut my teeth" on FORTRAN, I totally loved programming in C. I still think C was one of the great "classic" languages. Something that C++ is certainly NOT, in my opinion.

Cheers,

-R

EDIT: To respond to the comments on teaching C++. You can teach C++ in two ways - either teaching it as C "on steroids" (start with variables, conditions, loops, etc), or teaching it as a pure "OO" language (start with classes, methods, etc). You can find teaching texts that use one or other of these approaches. I prefer the latter approach (OO first) as it does emphasize the capabilities of C++ as an OO language (which was the original design emphasis of C++). If you want to teach C++ "as C", then I think you should teach C, not C++.

But the problem with C++ as a first language in my experience is that the language is simply too BIG to teach in one semester, plus most "intro" texts try and cover everything. It is simply not possible to cover all the topics in a "first language" course. You have to at least split it into 2 semesters, and then it's no longer "first language", IMO.

I do teach C++, but only as a "new language" - that is, you must be proficient in some prior "pure" language (not scripting or macros) before you can enroll in the course. C++ is a very fine "second language" to learn, IMO.

-R

'Nother Edit: (to Konrad)

I do not at all agree that C++ "is superior in every way" to C. I spent years coding C programs for microcontrollers and other embedded applications. The C compilers for these devices are highly optimized, often producing code as good as hand-coded assembler. When you move to C++, you gain a tremendous overhead imposed by the compiler in order to manage language features you may not use. In embedded applications, you gain little by adding classes and such, IMO. What you need is tight, clean code. You can write it in C++, but then you're really just writing C, and the C compilers are more optimized in these applications.

I wrote a MIDI engine, first in C, later in C++ (at the vendor's request) for an embedded controller (sound card). In the end, to meet the performance requirements (MIDI timings, etc) we had to revert to pure C for all of the core code. We were able to use C++ for the high-level code, and having classes was very sweet - but we needed C to get the performance at the lower level. The C code was an order of magnitude faster than the C++ code, but hand coded assembler was only slightly faster than the compiled C code. This was back in the early 1990s, just to place the events properly.

-R


Hibernate is useless and damaging to the minds of developers.


There is no "one size fits all" approach to development

I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.

Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.

People even seem to go as far as putting badges on their blogs such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.

It isn't.

Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.


What strikes me as amusing about this question is that I've just read the first page of answers, and so far, I haven't found a single controversial opinion.

Perhaps that says more about the way stackoverflow generates consensus than anything else. Maybe I should have started at the bottom. :-)


90 percent of programmers are pretty damn bad programmers, and virtually all of us have absolutely no tools to evaluate our current ability level (although we can generally look back and realize how bad we USED to suck)

I wasn't going to post this because it pisses everyone off and I'm not really trying for a negative score or anything, but:

A) isn't that the point of the question, and

B) Most of the "Answers" in this thread prove this point

I heard a great analogy the other day: Programming abilities vary AT LEAST as much as sports abilities. How many of us could jump into a professional team and actually improve their chances?


MVC for the web should be far simpler than traditional MVC.

Traditional MVC involves code that "listens" for "events" so that the view can continually be updated to reflect the current state of the model. In the web paradigm however, the web server already does the listening, and the request is the event. Therefore MVC for the web need only be a specific instance of the mediator pattern: controllers mediating between views and the model. If a web framework is crafted properly, a re-usable core should probably not be more than 100 lines. That core need only implement the "page controller" paradigm but should be extensible so as to be able to support the "front controller" paradigm.

Below is a method that is the crux of my own framework, used successfully in an embedded consumer device manufactured by a Fortune 100 network hardware manufacturer, for a Fortune 50 media company. My approach has been likened to Smalltalk by a former Smalltalk programmer and author of an Oreilly book about the most prominent Java web framework ever; furthermore I have ported the same framework to mod_python/psp.

static function sendResponse(IBareBonesController $controller) {
  $controller->setMto($controller->applyInputToModel());
  $controller->mto->applyModelToView();
}


I work in ASP.NET / VB.NET a lot and find ViewState an absolute nightmare. It's enabled by default on the majority of fields and causes a large quantity of encoded data at the start of every web page. The bigger a page gets in terms of controls on a page, the larger the ViewState data will become. Most people don't bat an eye at it, but it creates a large set of data which is usually irrelevant to the tasks being carried out on the page. You must manually disable this option on all ASP controls if they're not being used. It's either that or have custom controls for everything.
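
For what it's worth, disabling it per control looks roughly like this in code-behind (shown here in C# with invented control names; the same property can also be set in the markup):

protected void Page_Load(object sender, EventArgs e)
{
    // This grid is rebuilt from the database on every request anyway, so its
    // ViewState is pure dead weight in the page payload.
    ResultsGrid.EnableViewState = false;
    FilterPanel.EnableViewState = false;
}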

On some pages I work with, half of the page is made up of ViewState, which is a shame really as there's probably better ways of doing it.

That's just one small example I can think of in terms of language/technology opinions. It may be controversial.

By the way, you might want to edit voting on this thread, it could get quite heated by some ;)


A programming task is only fun while it's impossible - that is, up until the point where you've convinced yourself you'll be able to solve it successfully.

This, I suppose, is why so many of my projects end up halfway finished in a folder called "to_be_continued".


Opinion: developers should be testing their own code

I've seen too much crap handed off to test only to have it not actually fix the bug in question, incurring communication overhead and fostering irresponsible practices.


Useful and clean high-level abstractions are significantly more important than performance

one example:

Too often I watch peers spending hours writing overcomplicated sprocs, or massive LINQ queries which return unintuitive anonymous types, for the sake of "performance".

They could achieve almost the same performance but with considerably cleaner, intuitive code.


Bad Programmers are Language-Agnostic

A really bad programmer can write bad code in almost any language.


2 space indent.

No discussion. It just has to be that way ;-)


We do a lot of development here using a Model-View-Controller framework we built. I'm often telling my developers that we need to violate the rules of the MVC design pattern to make the site run faster. This is a hard sell for developers, who are usually unwilling to sacrifice well-designed code for anything. But performance is our top priority in building web applications, so sometimes we have to make concessions in the framework.

For example, the view layer should never talk directly to the database, right? But if you are generating large reports, the app will use a lot of memory to pass that data up through the model and controller layers. If you have a database that supports cursors, it can make the app a lot faster to hit the database directly from the view layer.
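
A rough sketch of that concession (names invented, and simplified to ADO.NET for illustration): the "view" streams rows straight off a data reader, one at a time, instead of materializing the whole result set up in the model layer.

using System.Data.SqlClient;
using System.IO;

public static class ReportView
{
    public static void Render(TextWriter output, string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("SELECT CustomerName, Total FROM OrderReport", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // One row in memory at a time, written straight to the response.
                    output.WriteLine("{0}\t{1}", reader.GetString(0), reader.GetDecimal(1));
                }
            }
        }
    }
}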

Performance trumps development standards, that's my controversial view.


Any sufficiently capable library is too complicated to be usable, and any library simple enough to be usable lacks the capabilities needed to be a good general solution.

I run into this constantly. Exhaustive libraries that are so complicated to use I tear my hair out, and simple, easy to use libraries that don't quite do what I need them to do.


Cowboy coders get more done.

I spend my life in the startup atmosphere. Without the Cowboy coders we'd waste endless cycles making sure things are done "right".

As we know, it's basically impossible to foresee all issues. The Cowboy coder runs head-on into these problems and is forced to solve them much more quickly than someone who tries to foresee them all.

Though, if you're Cowboy coding you had better refactor that spaghetti before someone else has to maintain it. ;) The best ones I know use continuous refactoring. They get a ton of stuff done, don't waste time trying to predict the future, and through refactoring it becomes maintainable code.

Process always gets in the way of a good Cowboy, no matter how Agile it is.


QA can be done well, over the long haul, without exploring all forms of testing

Lots of places seem to have an "approach", how "we do it". This seems to implicitly exclude other approaches.

This is a serious problem over the long term, because the primary function of QA is to file bugs -and- get them fixed.

You cannot do this well if you are not finding as many bugs as possible. When you exclude methodologies, for example, by being too black-box dependent, you start to ignore entire classes of discoverable coding errors. That means, by implication, you are making entire classes of coding errors unfixable, except when someone else stumbles on it.

The underlying problem often seems to be management + staff. Managers with this problem seem to have narrow thinking about the computer science and/or the value proposition of their team. They tend to create teams that reflect their approach, and a whitelist of testing methods.

I am not saying you can or should do everything all the time. Let's face it, some test methods are simply going to be a waste of time for a given product. And some methodologies are more useful at certain levels of product maturity. But what I think is missing is the ability of testing organizations to challenge themselves to learn new things, and apply that to their overall performance.

Here's a hypothetical conversation that would sum it up:

Me: You tested that startup script for 10 years, and you managed to learn NOTHING about shell scripts and how they work?!

Tester: Yes.

Me: Permissions?

Tester: The installer does that

Me: Platform, release-specific dependencies?

Tester: We file bugs for that

Me: Error handling?

Tester: When errors happen, customer support sends us some info.

Me: Okay...(starts thinking about writing post in stackoverflow...)


Before January 1st 1970, true and false were the other way around...


Programmers need to talk to customers

Some programmers believe that they don't need to be the ones talking to customers. It's a sure way for your company to end up writing something absolutely brilliant that nobody can work out the purpose of, or how it was intended to be used.

You can't expect product managers and business analysts to make all the decisions. In fact, programmers should be making 990 out of the 1000 (often small) decisions that go into creating a module or feature, otherwise the product would simply never ship! So make sure your decisions are informed. Understand your customers, work with them, watch them use your software.

If you're going to write the best code, you want people to use it. Take an interest in your user base and learn from the "dumb idiots" who are out there. Don't be afraid, they'll actually love you for it.


Premature optimization is NOT the root of all evil! Lack of proper planning is the root of all evil.

Remember the old naval saw

Proper Planning Prevents P*ss Poor Performance!


Most developers don't have a clue

Yup .. there you go. I've said it. I find that from all the developers that I personally know .. just a handful are actually good. Just a handful understand that code should be tested ... that the Object Oriented approach to developing is actually there to help you. It frustrates me to no end that there are people who get the title of developer while in fact all they can do is copy and paste a bit of source code and then execute it.

Anyway ... I'm glad initiatives like stackoverflow are being started. It's good for developers to wonder. Is there a better way? Am I doing it correctly? Perhaps I could use this technique to speed things up, etc ...

But nope ... the majority of developers just learn a language that they are required by their job and stick with it until they themselves become old and grumpy developers that have no clue what's going on. All they'll get is a big paycheck since they are simply older than you.

Ah well ... life is unjust in the IT community and I'll be taking steps to ignore such people in the future. Hooray!


That (at least during initial design) every Database Table (well, almost every one) should be clearly defined to contain some clearly understandable business entity or system-level domain abstraction, and that, whether or not you use it as a primary key and as Foreign Keys in other dependent tables, some column (attribute) or subset of the table attributes should be clearly defined to represent a unique key for that table (entity/abstraction). This is the only way to ensure that the overall table structure represents a logically consistent representation of the complete system data structure, without overlap or misunderstood flattening. I am a firm believer in using non-meaningful surrogate keys for PKs and FKs and join functionality (for performance, ease of use, and other reasons), but I believe the tendency in this direction has taken the database community too far away from the original Codd principles, and we have lost much of the benefit (of database consistency) that natural keys provided.

So why not use both?


Having a process that requires code to be approved before it is merged onto the main line is a terrible idea. It breeds insecurity and laziness in developers who, if they knew they could be screwing up dozens of people, would be very careful about the changes they make, but instead get lulled into a sense of not having to think about all the possible clients of the code they may be affecting. The person going over the code is less likely to have thought about it as much as the person writing it, so it can actually lead to poorer-quality code being checked in... though, yes, it will probably follow all the style guidelines and be well commented :)


My most controversial programming opinion is that finding performance problems is not about measuring, it is about capturing.

If you're hunting for elephants in a room (as opposed to mice) do you need to know how big they are? NO! All you have to do is look. Their very bigness is what makes them easy to find! It isn't necessary to measure them first.

The idea of measurement has been common wisdom at least since the paper on gprof (Susan L. Graham, et al 1982)*, when all along, right under our noses, has been a very simple and direct way to find code worth optimizing.

As a small example, here's how it works. Suppose you take 5 random-time samples of the call stack, and you happen to see a particular instruction on 3 out of 5 samples. What does that tell you?

.............   .............   .............   .............   .............
.............   .............   .............   .............   .............
Foo: call Bar   .............   .............   Foo: call Bar   .............
.............   Foo: call Bar   .............   .............   .............
.............   .............   .............   Foo: call Bar   .............
.............   .............   .............   .............   .............
                .............                                   .............

It tells you the program is spending 60% of its time doing work requested by that instruction. Removing it removes that 60%:

...\...../...   ...\...../...   .............   ...\...../...   .............
....\.../....   ....\.../....   .............   ....\.../....   .............
Foo: \a/l Bar   .....\./.....   .............   Foo: \a/l Bar   .............
......X......   Foo: cXll Bar   .............   ......X......   .............
...../.\.....   ...../.\.....   .............   Foo: /a\l Bar   .............
..../...\....   ..../...\....   .............   ..../...\....   .............
   /     \      .../.....\...                      /     \      .............

Roughly.

If you can remove the instruction (or invoke it a lot less), that's a 2.5x speedup, approximately. (Notice - recursion is irrelevant - if the elephant's pregnant, it's not any smaller.) Then you can repeat the process, until you truly approach an optimum.

  • This did not require accuracy of measurement, function timing, call counting, graphs, hundreds of samples, any of that typical profiling stuff.

Some people use this whenever they have a performance problem, and don't understand what's the big deal.

Most people have never heard of it, and when they do hear of it, think it is just an inferior mode of sampling. But it is very different, because it pinpoints problems by giving cost of call sites (as well as terminal instructions), as a percent of wall-clock time. Most profilers (not all), whether they use sampling or instrumentation, do not do that. Instead they give a variety of summary measurements that are, at best, clues to the possible location of problems. Here is a more extensive summary of the differences.
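
As a small, concrete illustration of the bookkeeping half of the technique (the samples themselves come from simply pausing the program in a debugger a handful of times and noting the call sites on each stack), here is a sketch in C#; the method name and sample format are invented for the example:

// Given a few manually captured stack samples, report what fraction of the
// samples each call site appears on. That fraction is a rough estimate of the
// share of wall-clock time attributable to that site.
using System;
using System.Collections.Generic;
using System.Linq;

static void SummarizeSamples(List<string[]> samples)
{
    var siteCounts = new Dictionary<string, int>();

    foreach (string[] stack in samples)
    {
        foreach (string site in stack.Distinct())   // count a site at most once per sample
        {
            int n;
            siteCounts.TryGetValue(site, out n);    // n stays 0 if the site is new
            siteCounts[site] = n + 1;
        }
    }

    foreach (var pair in siteCounts.OrderByDescending(p => p.Value))
        Console.WriteLine("{0}: on {1} of {2} samples (~{3:F0}% of time)",
                          pair.Key, pair.Value, samples.Count,
                          100.0 * pair.Value / samples.Count);
}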

*In fact that paper claimed that the purpose of gprof was to "help the user evaluate alternative implementations of abstractions". It did not claim to help the user locate the code needing an alternative implementation, at a finer level than functions.


My second most controversial opinion is this, or it might be if it weren't so hard to understand.


XML is highly overrated

I think too many jump onto the XML bandwagon before using their brains... XML for web stuff is great, as it's designed for it. Otherwise I think some problem definition and design thoughts should preempt any decision to use it.

My 5 cents


Assembler is not dead

In my job (copy protection systems) assembler programming is essential. I have worked with many HLL copy protection systems, and only assembler gives you the real power to utilize all the possibilities hidden in the code (like code mutation and other low-level stuff).

Also, many code optimizations are possible only with assembler programming. Look at the sources of any video codec: they are written in assembler and optimized to use MMX/SSE/SSE2 opcodes. Many game engines use assembler-optimized routines, and even the Windows kernel has SSE-optimized routines:

NTDLL.RtlMoveMemory

.text:7C902CD8                 push    ebp
.text:7C902CD9                 mov     ebp, esp
.text:7C902CDB                 push    esi
.text:7C902CDC                 push    edi
.text:7C902CDD                 push    ebx
.text:7C902CDE                 mov     esi, [ebp+0Ch]
.text:7C902CE1                 mov     edi, [ebp+8]
.text:7C902CE4                 mov     ecx, [ebp+10h]
.text:7C902CE7                 mov     eax, [esi]
.text:7C902CE9                 cld
.text:7C902CEA                 mov     edx, ecx
.text:7C902CEC                 and     ecx, 3Fh
.text:7C902CEF                 shr     edx, 6
.text:7C902CF2                 jz      loc_7C902EF2
.text:7C902CF8                 dec     edx
.text:7C902CF9                 jz      loc_7C902E77
.text:7C902CFF                 prefetchnta byte ptr [esi-80h]
.text:7C902D03                 dec     edx
.text:7C902D04                 jz      loc_7C902E03
.text:7C902D0A                 prefetchnta byte ptr [esi-40h]
.text:7C902D0E                 dec     edx
.text:7C902D0F                 jz      short loc_7C902D8F
.text:7C902D11
.text:7C902D11 loc_7C902D11:                           ; CODE XREF: .text:7C902D8Dj
.text:7C902D11                 prefetchnta byte ptr [esi+100h]
.text:7C902D18                 mov     eax, [esi]
.text:7C902D1A                 mov     ebx, [esi+4]
.text:7C902D1D                 movnti  [edi], eax
.text:7C902D20                 movnti  [edi+4], ebx
.text:7C902D24                 mov     eax, [esi+8]
.text:7C902D27                 mov     ebx, [esi+0Ch]
.text:7C902D2A                 movnti  [edi+8], eax
.text:7C902D2E                 movnti  [edi+0Ch], ebx
.text:7C902D32                 mov     eax, [esi+10h]
.text:7C902D35                 mov     ebx, [esi+14h]
.text:7C902D38                 movnti  [edi+10h], eax

So the next time you hear that assembler is dead, think about the last movie you watched or the game you played (and its copy protection, heh).


There are far too many programmers who write far too much code.


The customer is not always right.

In most cases that I deal with, the customer is the product owner, aka "the business". All too often, developers just code and do not try to provide a vested stake in the product. There is too much of a misconception that the IT Department is a "company within a company", which is a load of utter garbage.

I feel my role is that of helping the business express their ideas - with the mutual understanding that I take an interest in understanding the business so that I can provide the best experience possible. That route implies that there will be times when the product owner asks for something that he/she feels is the next revolution in computing, leaving someone to either agree or explain the more likely reason why nobody does it that way. It is mutually beneficial, because the product owner understands the thought that goes into the product, and the development team understands that they do more than sling code.

This has actually started to lead us down the path of increased productivity. How? Since the communication has improved due to disagreements on both sides of the table, it is more likely that we come together earlier in the process and come to a mutually beneficial solution to the product definition.


Notepad is a perfectly fine text editor. (And sometimes WordPad for non-Windows line breaks)

  • Edit config files
  • View log files
  • Development

I know people who actually believe this! They will however use an IDE for development, but continue to use Notepad for everything else!


  1. Good architecture is grown, not designed.

  2. Managers should make sure their team members always work below their state of the art, whatever that level is. When people work within their comfort zone they produce higher quality code.


You can't measure productivity by counting lines of code.

Everyone knows this, but for some reason the practice still persists!


Non-development staff should not be allowed to manage development staff.

Correction: Staff with zero development experience should not be allowed to manage development staff.


If you can only think of one way to do it, don't do it.

Whether it's an interface layout, a task flow, or a block of code, just stop. Do something to collect more ideas, like asking other people how they would do it, and don't go back to implementing until you have at least three completely different ideas and at least one crisis of confidence.

Generally, when I think something can only be done one way, or think only one method has any merit, it's because I haven't thought through the factors which ought to be influencing the design thoroughly enough. If I had, some of them would clearly be in conflict, leading to a mess and thus an actual decision rather than a rote default.

Being a solid programmer does not make you a solid interface designer

And following all of the interface guidelines in the world will only begin to help. If it's even humanly possible... There seems to be a peculiar addiction to making things 'cute' and 'clever'.


I work in ASP.NET / VB.NET a lot and find ViewState an absolute nightmare. It's enabled by default on the majority of controls and produces a large blob of encoded data at the start of every web page. The bigger a page gets in terms of controls, the larger the ViewState data becomes. Most people don't turn an eye to it, but it creates a large set of data which is usually irrelevant to the tasks being carried out on the page. You must manually disable this option on every ASP.NET control that doesn't need it. It's either that or have custom controls for everything.

On some pages I work with, half of the page is made up of ViewState, which is a shame really as there's probably better ways of doing it.
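
For anyone wondering, "disabling this option" just means the EnableViewState property, set per control or per page; the control name below is made up for illustration:

// Per control, in code-behind:
CustomerGridView.EnableViewState = false;   // hypothetical control that doesn't need ViewState

// Or for the whole page, in the page's code-behind:
this.EnableViewState = false;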

That's just one small example I can think of in terms of language/technology opinions. It may be controversial.

By the way, you might want to edit voting on this thread, it could get quite heated by some ;)


Respect the Single Responsibility Principle

At first glance you might not think this would be controversial, but in my experience when I mention to another developer that they shouldn't be doing everything in the page load method they often push back... so for the children, please quit building the "do everything" method we see all too often.


If you're a developer, you should be able to write code

I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:

Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.

It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:

Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.

Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.

I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!
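
For reference, one possible shape of an answer to both questions (a sketch only, certainly not the only valid one):

// The series alternates, so the error is bounded by the first omitted term;
// keep adding terms until the next one is smaller than the accuracy we want,
// e.g. EstimatePi(0.000005) for 5 decimal places.
static double EstimatePi(double accuracy)
{
    double sum = 0.0;
    double sign = 1.0;
    long denominator = 1;

    while (4.0 / denominator >= accuracy)
    {
        sum += sign * 4.0 / denominator;
        sign = -sign;
        denominator += 2;
    }
    return sum;
}

// And the second, "trivial" question:
static double CircleArea(double radius)
{
    return Math.PI * radius * radius;
}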


Edit:

There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into this here (that's a whole new question), except to say that it largely misses the point of the post.

Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.


Getting paid to program is generally one of the worst uses of a man's time.

For one thing, you're in competition with the Elbonians, who work for a quarter a day. You need to convince your employer that you offer something the Elbonians never can, and that your something is worth a livable salary. As the Elbonians get more and more overseas business, the real advantage wears thin, and management knows it.

For another thing, you're spending time solving someone else's problems. That's time you could spend advancing your own interests, or working on problems that actually interest you. And if you think you're saving the world by working on the problems of other men, then why don't you just get the Elbonians to do it for you?

Last, the great innovations in software (VisiCalc, Napster, Pascal, etc.) were not created by cubicle farms. They were created by one or two people without advance pay. You can't forcibly recreate that. It's just magic that sometimes happens when a competent programmer has a really good idea.

There is enough software. There are enough software developers. You don't have to be one for hire. Save your talents, your time, your hair, your marriage. Let someone else sell his soul to the keyboard. If you want to program, fine. But don't do it for the money.


The more process you put around programming, the worse the code becomes

I have noticed something in my 8 or so years of programming, and it seems ridiculous. It's that the only way to get quality is to employ quality developers, and remove as much process and formality from them as you can. Unit testing, coding standards, code/peer reviews, etc only reduce quality, not increase it. It sounds crazy, because the opposite should be true (more unit testing should lead to better code, great coding standards should lead to more readable code, code reviews should improve the quality of code) but it's not.

I think it boils down to the fact we call it "Software Engineering" when really it's design and not engineering at all.


Some numbers to substantiate this statement:

From the Editor

IEEE Software, November/December 2001

Quantifying Soft Factors

by Steve McConnell

...

Limited Importance of Process Maturity

... In comparing medium-size projects (100,000 lines of code), the one with the worst process will require 1.43 times as much effort as the one with the best process, all other things being equal. In other words, the maximum influence of process maturity on a project’s productivity is 1.43. ...

... What Clark doesn’t emphasize is that for a program of 100,000 lines of code, several human-oriented factors influence productivity more than process does. ...

... The seniority-oriented factors alone (AEXP, LTEX, PEXP) exert an influence of 3.02. The seven personnel-oriented factors collectively (ACAP, AEXP, LTEX, PCAP, PCON, PEXP, and SITE §) exert a staggering influence range of 25.8! This simple fact accounts for much of the reason that non-process-oriented organizations such as Microsoft, Amazon.com, and other entrepreneurial powerhouses can experience industry-leading productivity while seemingly shortchanging process. ...

The Bottom Line

... It turns out that trading process sophistication for staff continuity, business domain experience, private offices, and other human-oriented factors is a sound economic tradeoff. Of course, the best organizations achieve high motivation and process sophistication at the same time, and that is the key challenge for any leading software organization.

§ Read the article for an explanation of these acronyms.


Don't use keywords for basic types if the language has the actual type exposed. In C#, this would refer to bool (Boolean), int (Int32), float (Single), long (Int64). 'int', 'bool', etc are not actual parts of the language, but rather just 'shortcuts' or 'aliases' for the actual type. Don't use something that doesn't exist! And in my opinion, Int16, Int32, Int64, Boolean, etc makes a heck of a lot more sense than 'short', 'long', 'int'.
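
To make the preference concrete (a trivial snippet, assuming a using System; directive so the framework names are in scope):

int   countA = 0;          // keyword alias
Int32 countB = 0;          // the underlying framework type, System.Int32

bool    doneA = false;     // keyword alias
Boolean doneB = false;     // System.Boolean

The two forms compile to exactly the same thing; the argument is purely about which reads better.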


SESE (Single Entry Single Exit) is not law

Example:

public int foo() {
   if( someCondition ) {
      return 0;
   }

   return -1;
}

vs:

public int foo() {
   int returnValue = -1;

   if( someCondition ) {
      returnValue = 0;
   }

   return returnValue;
}

My team and I have found that abiding by this all the time is actually counter-productive in many cases.


I think it's fine to use goto statements, if you use them in a sane way (and a sane programming language). They can often make your code a lot easier to read and don't force you to use some twisted logic just to get one simple thing done.


The ability to create UML diagrams similar to pretzels with mad cow disease is not actually a useful software development skill.

The whole point of diagramming code is to visualise connections, to see the shape of a design. But once you pass a certain rather low level of complexity, the visualisation is too much to process mentally. Making connections pictorially is only simple if you stick to straight lines, which typically makes the diagram much harder to read than if the connections were cleverly grouped and routed along the cardinal directions.

Use diagrams only for broad communication purposes, and only when they're understood to be lies.


Source Control: Anything But SourceSafe

Also: Exclusive locking is evil.

I once worked somewhere where they argued that exclusive locks guaranteed that people were not overwriting someone else's changes when they checked in. The problem was that, in order to get any work done, if a file was locked devs would just change their local copy to writable and then merge (or overwrite) the source-controlled version with their own when they got the chance.


  • Xah Lee: actually has some pretty noteworthy and legitimate viewpoints if you can filter out all the invective, and rationally evaluate statements without agreeing (or disagreeing) based solely on the personality behind the statements. A lot of my "controversial" viewpoints have been echoed by him, and other notorious "trolls" who have criticized languages or tools I use(d) on a regular basis.

  • [Documentation Generators](http://en.wikipedia.org/wiki/Comparison_of_documentation_generators): ... the kind where the creator invented some custom-made, especially-for-documenting-source-code, roll-your-own syntax (including, but not limited to, JavaDoc) are totally superfluous and a waste of time because:

    • 1) They are underused by the people who should be using them the most; and
    • 2) All of these mini-documentation languages could easily be replaced with YAML

The word 'evil' is an abused and overused word on Stackoverflow and similar forums.

People who use it have too little imagination.


Macros, Preprocessor instructions and Annotations are evil.

One syntax and language per file please!

// does not apply to Make files, or editor macros that insert real code.


Preconditions for arguments to methods/functions should be part of the language, rather than something programmers must check by hand every time.
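
For illustration, this is the hand-written guard-clause boilerplate that ends up at the top of every public method today (a sketch; the method and messages are invented). Libraries such as Code Contracts (System.Diagnostics.Contracts) push some of this towards declarations, but it still isn't a first-class language feature:

using System;

// The same two preconditions, written and maintained by hand.
static decimal Withdraw(decimal balance, decimal amount)
{
    if (amount <= 0)
        throw new ArgumentOutOfRangeException("amount", "Amount must be positive.");
    if (amount > balance)
        throw new ArgumentException("Amount may not exceed the balance.", "amount");

    return balance - amount;
}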


It's fine if you don't know. But you're fired if you can't even google it.

The Internet is a tool. It's not making you stupider if you're learning from it.


Developers should be able to modify production code without getting permission from anyone as long as they document their changes and notify the appropriate parties.


Software engineers should not work with computer science guys

Their differences:

  • SEs care about code reusability, while CSs just suss out code
  • SEs care about performance, while CSs just want to have things done now
  • SEs care about whole structure, while CSs do not give a toss
  • ...


Test Constantly

You have to write tests, and you have to write them FIRST. Writing tests changes the way you write your code. It makes you think about what you want it to actually do before you just jump in and write something that does everything except what you want it to do.

It also gives you goals. Watching your tests go green gives you that little extra bump of confidence that you're getting something accomplished.

It also gives you a basis for writing tests for your edge cases. Since you wrote the code against tests to begin with, you probably have some hooks in your code to test with.

There is no excuse not to test your code. If you don't, you're just lazy. I also think you should test first, as the benefits outweigh the extra time it takes to code this way.


Primitive data types are premature optimization.

There are languages that get by with just one data type, the scalar, and they do just fine. Other languages are not so fortunate. Developers just throw "int" and "double" in because they have to write in something.

What's important is not how big the data types are, but what the data is used for. If you have a day of the month variable, it doesn't matter much if it's signed or unsigned, or whether it's char, short, int, long, long long, float, double, or long double. It does matter that it's a day of the month, and not a month, or day of week, or whatever. See Joel's column on making things that are wrong look wrong; Hungarian notation as originally proposed was a Good Idea. As used in practice, it's mostly useless, because it says the wrong thing.
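
One way to act on this (a sketch, with an invented type name) is a thin wrapper that makes the meaning part of the type rather than the storage size:

using System;

// Hypothetical: the point is "day of the month", not byte vs. int vs. long.
public struct DayOfMonth
{
    public int Value { get; }

    public DayOfMonth(int value)
    {
        if (value < 1 || value > 31)
            throw new ArgumentOutOfRangeException("value", "A day of the month is 1-31.");
        Value = value;
    }
}

// A method declared as void ScheduleBilling(DayOfMonth day) can no longer be
// handed a month, a weekday, or a random int by mistake.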


I believe the use of try/catch exception handling is worse than the use of simple return codes and associated common messaging structures to ferry useful error messages.

Littering code with try/catch blocks is not a solution.

Just passing exceptions up the stack hoping that what's above you will do the right thing or generate an informative error is not a solution.

Thinking you have any chance of systematically verifying that the proper exception handlers are available to address anything that could go wrong in either transparent or opaque objects is not realistic. (Think also in terms of late bindings/external libraries and unnecessary dependencies between unrelated functions in a call stack as the system evolves.)

Use of return codes is simple, can be easily and systematically verified for coverage, and if handled properly forces developers to generate useful error messages rather than the all-too-common stack dumps and obscure I/O exceptions that are "exceptionally" meaningless to even the most clueful of end users.
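
A minimal sketch of the "return codes plus a common messaging structure" approach being advocated here (all names invented for illustration):

using System.IO;

// A code that can be checked and verified systematically, plus a human-readable
// message built where the failure actually happened.
public enum ErrorCode { Ok, NotFound, PermissionDenied, IoFailure }

public struct Result
{
    public ErrorCode Code;
    public string Message;

    public static Result Ok() { return new Result { Code = ErrorCode.Ok }; }
    public static Result Fail(ErrorCode code, string message)
    {
        return new Result { Code = code, Message = message };
    }
}

// Callers must look at Code, and can always surface Message to the user.
static Result LoadConfig(string path)
{
    if (!File.Exists(path))
        return Result.Fail(ErrorCode.NotFound, "Config file '" + path + "' was not found.");

    // ... parse the file ...
    return Result.Ok();
}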

--

My final objection is the use of garbage collected languages. Don't get me wrong.. I love them in some circumstances but in general for server/MC systems they have no place in my view.

GC is not infallible - even extremely well designed GC algorithms can hang on to objects too long or even forever based on non-obvious circular references in their dependency graphs.

Non-GC systems following a few simple patterns and use of memory accounting tools don't have this problem but do require more work in design and test upfront than GC environments. The tradeoff here is that memory leaks are extremely easy to spot during testing in Non-GC while finding GC related problem conditions is a much more difficult proposition.

Memory is cheap, but what happens when you leak expensive objects such as transaction handles, synchronization objects, or socket connections? In my environment the very thought that you can just sit back and let the language worry about this for you is unthinkable without significant, fundamental changes in how the software is designed.


HTML 5 + JavaScript will be the most used UI programming platform of the future. Flash, Silverlight, Java Applets, etc. are all going to die a silent death.


Programmers take their (own little limited stupid) programming language as a sacrosanct religion.

It's so funny how programmers take these discussions almost like religious believers do: no criticism allowed, (often) no objective discussion, (very often) arguing based upon very limited or absent knowledge and information. For confirmation, just read the previous answers, and especially the comments.

Also funny, and another confirmation: by definition of the question "give me a controversial opinion", any controversial opinion should NOT qualify for negative votes - actually the opposite: the more controversial, the better. But how do our programmers react? Like Pavlov's dogs, voting negative on disliked opinions.

PS: I upvoted some others for fairness.


Okay, I said I'd give a bit more detail on my "sealed classes" opinion. I guess one way to show the kind of answer I'm interested in is to give one myself :)

Opinion: Classes should be sealed by default in C#

Reasoning:

There's no doubt that inheritance is powerful. However, it has to be somewhat guided. If someone derives from a base class in a way which is completely unexpected, this can break the assumptions in the base implementation. Consider two methods in the base class, where one calls another - if these methods are both virtual, then that implementation detail has to be documented, otherwise someone could quite reasonably override the second method and expect a call to the first one to work. And of course, as soon as the implementation is documented, it can't be changed... so you lose flexibility.
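
A minimal illustration of the kind of breakage I mean (types and names invented for the example):

using System.Collections.Generic;

public class Item { }

public class Repository
{
    public virtual void Save(Item item) { /* write one item */ }

    // Implementation detail: SaveAll currently routes through Save.
    public virtual void SaveAll(IEnumerable<Item> items)
    {
        foreach (Item item in items) Save(item);
    }
}

public class AuditedRepository : Repository
{
    // Overrides Save assuming SaveAll goes through it. If the base class is
    // later rewritten to do a bulk save, auditing silently stops working.
    public override void Save(Item item)
    {
        Audit(item);
        base.Save(item);
    }

    private void Audit(Item item) { /* log the change */ }
}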

C# took a step in the right direction (relative to Java) by making methods sealed by default. However, I believe a further step - making classes sealed by default - would have been even better. In particular, it's easy to override methods (or not explicitly seal existing virtual methods which you don't override) so that you end up with unexpected behaviour. This wouldn't actually stop you from doing anything you can currently do - it's just changing a default, not changing the available options. It would be a "safer" default though, just like the default access in C# is always "the most private visibility available at that point."

By making people explicitly state that they wanted people to be able to derive from their classes, we'd be encouraging them to think about it a bit more. It would also help me with my laziness problem - while I know I should be sealing almost all of my classes, I rarely actually remember to do so :(

Counter-argument:

I can see an argument that says that a class which has no virtual methods can be derived from relatively safely without the extra inflexibility and documentation usually required. I'm not sure how to counter this one at the moment, other than to say that I believe the harm of accidentally-unsealed classes is greater than that of accidentally-sealed ones.


The users aren't idiots -- you are.

So many times I've heard developers say "so-and-so is an idiot" and my response is typically "he may be an idiot but you allowed him to be one."


Don't be shy, throw an exception. Exceptions are a perfectly valid way to signal failure, and are much clearer than any return-code system. "Exceptional" has nothing to do with how often this can happen, and everything to do with what the class considers normal execution conditions. Throwing an exception when a division by zero occurs is just fine, regardless of how often the case can happen. If the problem is likely, guard your code so that the method doesn't get called with incorrect arguments.


As there are hundreds of answers to this mine will probably end up unread, but here's my pet peeve anyway.

If you're a programmer then you're most likely awful at Web Design/Development

This website is a phenomenal resource for programmers, but an absolutely awful place to come if you're looking for XHTML/CSS help. Even the good Web Developers here are handing out links to resources that were good in the 90's!

Sure, XHTML and CSS are simple to learn. However, you're not just learning a language! You're learning how to use it well, and very few designers and developers can do that, let alone programmers. It took me ages to become a capable designer and even longer to become a good developer. I could code in HTML from the age of 10 but that didn't mean I was good. Now I am a capable designer in programs like Photoshop and Illustrator; I am perfectly able to write a good website in Notepad and can write basic scripts in several languages. Not only that, but I have a good nose for Search Engine Optimisation techniques and can easily tell you where the majority of people are going wrong (hint: get some good content!).

Also, this place is a terrible resource for advice on web standards. You should NOT just write code to work in the different browsers. You should ALWAYS follow the standard to future-proof your code. More often than not the fixes you use on your websites will break when the next browser update comes along. Not only that but the good browsers follow standards anyway. Finally, the reason IE was allowed to ruin the Internet was because YOU allowed it by coding your websites for IE! If you're going to continue to do that for Firefox then we'll lose out yet again!

If you think that table-based layouts are as good, if not better than CSS layouts then you should not be allowed to talk on the subject, at least without me shooting you down first. Also, if you think W3Schools is the best resource to send someone to then you're just plain wrong.

If you're new to Web Design/Development don't bother with this place (it's full of programmers, not web developers). Go to a good Web Design/Development community like SitePoint.


Variable_Names_With_Bloody_Underscores

or even worse

CAPITALIZED_VARIABLE_NAMES_WITH_BLOODY_UNDERSCORES

should be globally expunged... with prejudice! CamelCapsAreJustFine. (Global constants notwithstanding)

GOTO statements are for use by developers under the age of 11

Any language that does not support pointers is not worthy of the name

.Net = .Bloat. The finest example of Microsoft's efforts for web site development (Expressionless Web 2) is also the finest example of slow, bloated cr@pw@re ever written. (Try Web Studio instead.)

Response: OK well let me address the Underscore issue a little. From the C link you provided:

-Global constants should be all caps with '_' separators. This I actually agree with because it is so BLOODY_OBVIOUS

-Take for example NetworkABCKey. Notice how the C from ABC and K from key are confused. Some people don't mind this and others just hate it so you'll find different policies in different code so you never know what to call something.

I fall into the former category. I choose names VERY carefully, and if you cannot figure out in one glance that the K belongs to Key then English is probably not your first language.

  • C Function Names

    • In a C++ project there should be very few C functions.
    • For C functions use the GNU convention of all lower case letters with '_' as the word delimiter.

Justification

* It makes C functions very different from any C++ related names. 

Example

int some_bloody_function() { }

These "standards" and conventions are simply the arbitrary decisions handed down through time. I think that while they make a certain amount of logical sense, They clutter up code and make something that should be short and sweet to read, clumsy, long winded and cluttered.

C has been adopted as the de-facto standard, not because it is friendly, but because it is pervasive. I can write 100 lines of C code in 20 with a syntactically friendly high level language.

This makes the program flow easy to read, and as we all know, revisiting code after a year or more means following the breadcrumb trail all over the place.

I do use underscores, but for global variables only, as they are few and far between and they stick out clearly. Other than that, a well thought out CamelCaps() function/variable name has yet to let me down!


Every developer should spend several weeks, or even months, developing paper-based systems before they start building electronic ones. They should also then be forced to use their systems.

Developing a good paper-based system is hard work. It forces you to take into account human nature (cumbersome processes get ignored, ones that are too complex tend to break down), and teaches you to appreciate the value of simplicity (new work goes in this tray, work for QA goes in this tray, archiving goes in this box).

Once you've worked out how to build a system on paper, it's often a lot easier to build an effective computer system - one that people will actually want to (and be able to) use.

The systems we develop are not manned by an army of perfectly-trained automata; real people use them, real people who are trained by managers who are also real people and have far too little time to waste training them how to jump through your hoops.

In fact, for my second point:

Every developer should be required to run an interactive training course to show users how to use their software.


That best practices are a hazard because they ask us to substitute slogans for thinking.


Opinion: Not having declared argument types and return types can lead to flexible and readable code.

This opinion probably applies more to interpreted languages than compiled ones. Requiring a return type and a typed argument list is great for things like IntelliSense to auto-document your code, but it is also a restriction.

Now don't get me wrong, I am not saying throw away return types, or argument lists. They have their place. And 90% of the time they are more of a benefit than a hindrance.

There are times and places when this is useful.


Tcl/Tk is the best GUI language/toolkit combo ever

It may lack specific widgets and be less good-looking than the new kids on the block, but its model is elegant and so easy to use that one can build working GUIs faster by typing commands interactively than by using a visual interface builder. Its expressive power is unbeatable: other solutions (Gtk, Java, .NET, MFC...) typically require ten to one hundred LOC to get the same result as a Tcl/Tk one-liner. All without even sacrificing readability or stability.

pack [label .l -text "Hello world!"] [button .b -text "Quit" -command exit]

Recursion is fun.

Yes, I know it can be an ineffectual use of stack space, and all that jazz. But sometimes a recursive algorithm is just so nice and clean compared to its iterative counterpart. I always get a bit gleeful when I can sneak a recursive function in somewhere.
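
For example (just a sketch), summing the size of the files under a directory reads far more cleanly as recursion than with an explicit work stack:

using System.IO;

// The shape of the code mirrors the shape of the problem: a directory's size
// is its files plus the sizes of its subdirectories.
static long DirectorySize(string path)
{
    long total = 0;

    foreach (string file in Directory.GetFiles(path))
        total += new FileInfo(file).Length;

    foreach (string subDir in Directory.GetDirectories(path))
        total += DirectorySize(subDir);   // recurse into each subdirectory

    return total;
}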


Software Development is a VERY small subset of Computer Science.

People sometimes seem to think the two are synonymous, but in reality there are so many aspects to computer science that the average developer rarely (if ever) gets exposed to. Depending on one's career goals, I think there are a lot of CS graduates out there who would probably have been better off with some sort of Software Engineering education.

I value education highly, have a BS in Computer science and am pursuing a MS in it part time, but I think that many people who obtain these degrees treat the degree as a means to an end and benefit very little. I know plenty of people who took the same Systems Software course I took, wrote the same assembler I wrote, and to this day see no value in what they did.


Most comments in code are in fact a pernicious form of code duplication.

We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.

I think eventually many people just blank them out, especially those flowerbox monstrosities.

Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.

On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.


Opinion: explicit variable declaration is a great thing.

I'll never understand the "wisdom" of letting the developer waste costly time tracking down runtime errors caused by variable name typos instead of simply letting the compiler/interpreter catch them.

Nobody's ever given me an explanation better than "well it saves time since I don't have to write 'int i;'." Uhhhhh... yeah, sure, but how much time does it take to track down a runtime error?


C++ is future killer language...

... of dynamic languages.

Nobody owns it. It has a growing set of features like compile-time (meta-)programming and type inference, callbacks without the overhead of function calls, and it doesn't enforce a single approach (multi-paradigm). POSIX and ECMAScript regular expressions. Multiple return values. You can have named arguments. Etc., etc.

Things move really slowly in programming. It took JavaScript 10 years to get off the ground (mostly because of performance), and most of the people who program in it still don't get it (classes in JS? c'mon!). I'd say C++ will really start shining 15-20 years from now. That seems to me like about the right amount of time for C++ (the language as well as compiler vendors) and a critical mass of programmers who today write in dynamic languages to converge.

C++ needs to become more programmer-friendly (compiler errors generated from templates or compile times in the presence of same), and the programmers need to realize that static typing is a boon (it's already in progress, see other answer here which asserts that good code written in a dynamically typed language is written as if the language was statically typed).


A degree in Computer Science or other IT area DOES make you a more well rounded programmer

I don't care how many years of experience you have, how many blogs you've read, how many open source projects you're involved in. A qualification (I'd recommend longer than 3 years) exposes you to a different way of thinking and gives you a great foundation.

Just because you've written some better code than a guy with a BSc in Computer Science, does not mean you are better than him. What you have he can pick up in an instant which is not the case the other way around.

Having a qualification shows your commitment, the fact that you would go above and beyond experience to make you a better developer. Developers which are good at what they do AND have a qualification can be very intimidating.

I would not be surprised if this answer gets voted down.

Also, once you have a qualification, you slowly stop comparing yourself to those with qualifications (my experience). You realize that it all doesn't matter at the end, as long as you can work well together.

Always act mercifully towards other developers, irrespective of qualifications.


Debuggers are a crutch.

It's so controversial that even I don't believe it as much as I used to.

Con: I spend more time getting up to speed on other people's voluminous code, so anything that help with "how did I get here" and "what is happening" either pre-mortem or post-mortem can be helpful.

Pro: However, I happily stand by the idea that if you don't understand the answers to those questions for code that you developed yourself or that you've become familiar with, spending all your time in a debugger is not the solution, it's part of the problem.

Before hitting 'Post Your Answer' I did a quick Google check for this exact phrase, it turns out that I'm not the only one who has held this opinion or used this phrase. I turned up a long discussion of this very question on the Fog Creek software forum, which cited various luminaries including Linus Torvalds as notable proponents.


I believe that the "Let's Rewrite The Past And Try To Fix That Bug Pretending Nothing Ever Worked" is a valuable debugging mantra in desperate situations:

https://stackoverflow.com/questions/978904/do-you-use-the-orwellian-past-rewriting-debugging-philosophy-closed


Delphi is fun

Yes, I know it's outdated, but Delphi was and is a very fun tool to develop with.


1-based arrays should always be used instead of 0-based arrays. 0-based arrays are unnatural, unnecessary, and error prone.

When I count apples or employees or widgets I start at one, not zero. I teach my kids the same thing. There is no such thing as a 0th apple or 0th employee or 0th widget. Using 1 as the base for an array is much more intuitive and less error-prone. Forget about plus-one-minus-one-hell (as we used to call it). 0-based arrays are an unnatural construct invented by computer scientists - they do not reflect reality, and computer programs should reflect reality as much as possible.


Debuggers should be forbidden. This would force people to write code that is testable through unit tests, and in the end would lead to much better code quality.

Remove Copy & Paste from ALL programming IDEs. Copy & pasted code is very bad; this option should be completely removed. Then the programmer will hopefully be too lazy to retype all the code, so he makes a function and reuses the code.

Whenever you use a Singleton, slap yourself. Singletons are almost never necessary, and are most of the time just a fancy name for a global variable.


2 space indent.

No discussion. It just has to be that way ;-)


1) The Business Apps farce:

I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.

Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either, which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and write quickly. You have massive APIs, half of which exist just to integrate the work of the other APIs, interfaces that are impossible to recycle, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.

How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?

The point of this is not that complexity==bad, it's that unnecessary complexity==bad. I've worked in massive enterprise installations where some of it was necessary, but even in most cases a few home-grown scripts and a simple web frontend is all that's needed to solve most use cases.

I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.

2) The n-years-of-experience-required:

Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.

3) The common "computer science" degree curriculum:

The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.


What strikes me as amusing about this question is that I've just read the first page of answers, and so far, I haven't found a single controversial opinion.

Perhaps that says more about the way stackoverflow generates consensus than anything else. Maybe I should have started at the bottom. :-)


Reuse of code is inversely proportional to its "reusability". Simply because "reusable" code is more complex, whereas quick hacks are easy to understand, so they get reused.

Software failures should take down the system, so that it can be examined and fixed. Software attempting to handle failure conditions is often worse than crashing. ie, is it better to have a system reset after crashing, or should it be indefinitely hung because the failure handler has a bug?


According to the amount of feedback I've gotten, my most controversial opinion, apparently, is that programmers don't always read the books they claim to have read. This is followed closely by my opinion that a programmer with a formal education is better than the same programmer who is self-taught (but not necessarily better than a different programmer who is self-taught).


QA should know the code (indirectly) better than development. QA gets paid to find things development didn't intend to happen, and they often do. :) (Btw, I'm a developer who just values good QA guys a whole bunch -- far too few of them... far too few).


My controversial opinion: OO Programming is vastly overrated [and treated like a silver bullet], when it is really just another tool in the toolbox, nothing more!


The best programmers trace all their code in the debugger and test all paths.

Well... the OP said controversial!


Opinion: Never ever have different code between "debug" and "release" builds

The main reason being that release code almost never gets tested. Better to have the same code running in test as it is in the wild.


All source code and comments should be written in English

Writing source code and/or comments in languages other than English makes it less reusable and more difficult to debug if you don't understand the language they are written in.

Same goes for SQL tables, views, and columns, especially when abbreviations are used. If they aren't abbreviated, I might be able to translate the table/column name on-line, but if they're abbreviated all I can do is SELECT and try to decipher the results.


Lazy Programmers are the Best Programmers

A lazy programmer most often finds ways to decrease the amount of time spent writing code (especially a lot of similar or repeating code). This often translates into tools and workflows that other developers in the company/team can benefit from.

As the developer encounters similar projects he may create tools to bootstrap the development process (e.g. creating an ORM layer that works with the company's database design paradigms).

Furthermore, developers such as these often use some form of code generation. This means all bugs of the same type (for example, the code generator did not check for null parameters on all methods) can often be fixed by fixing the generator and not the 50+ instances of that bug.

A lazy programmer may take a few more hours to get the first product out the door, but will save you months down the line.


My one:

Long switch statements are your friends. Really. At least in C#.

People tend to avoid long switch statements and discourage others from using them because they are "unmanageable" and "have bad performance characteristics".

Well, the thing is that in C#, switch statements are always compiled automagically to hash jump tables so actually using them is the Best Thing To Do™ in terms of performance if you need simple branching to multiple branches. Also, if the case statements are organized and grouped intelligently (for example in alphabetical order), they are not unmanageable at all.
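
For what it's worth, here's the kind of long-but-grouped switch I'm defending (the command names are invented for illustration):

// A simple dispatch over command names, cases grouped alphabetically.
static string Describe(string command)
{
    switch (command)
    {
        case "add":      return "Adds an item.";
        case "archive":  return "Archives an item.";
        case "delete":   return "Deletes an item.";
        case "deploy":   return "Deploys the current build.";
        case "list":     return "Lists all items.";
        case "login":    return "Authenticates the user.";
        case "rename":   return "Renames an item.";
        case "restore":  return "Restores an archived item.";
        default:         return "Unknown command: " + command;
    }
}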


I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.

For one, I believe that a first programming language should be one that highlights the need to learn control flow and variables, not objects and syntax.

For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.

Also the natural progression should be from "how can I do this" to "how can I find the library which does that" and not the other way round.


Arrays should by default be 1-based rather than 0-based. This is not necessarily the case with system implementation languages, but languages like Java swallowed more C oddities than they should have. "Element 1" should be the first element, not the second, to avoid confusion.

Computer science is not software development. You wouldn't hire an engineer who studied only physics, after all.

Learn as much mathematics as is feasible. You won't use most of it, but you need to be able to think that way to be good at software.

The single best programming language yet standardized is Common Lisp, even if it is verbose and has zero-based arrays. That comes largely from being designed as a way to write computations, rather than as an abstraction of a von Neumann machine.

At least 90% of all comparative criticism of programming languages can be reduced to "Language A has feature C, and I don't know how to do C or something equivalent in Language B, so Language A is better."

"Best practices" is the most impressive way to spell "mediocrity" I've ever seen.


Stay away from Celko!!!!

http://www.dbdebunk.com/page/page/857309.htm

I think it makes a lot more sense to use surrogate primary keys than "natural" primary keys.


@ocdecio: Fabian Pascal gives (in chapter 3 of his book Practical issues in database management, cited in point 3 at the page that you link) stability (it always exists and doesn't change) as one of the criteria for choosing a key. When a natural key does not possess such a property, then a surrogate key must be used, for evident reasons, to which you hint in the comments.

You don't know what he wrote and you have not bothered to check, otherwise you could discover that you actually agree with him. Nothing controversial there: he was saying "don't be dogmatic, adapt general guidelines to circumstances, and, above all, think, use your brain instead of a dogmatic/cookbook/words-of-guru approach".


Not all programmers are created equal

Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.

It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)


When Creating Unit tests for a Data Access Layer, data should be retrieved directly from the DB, not from mock objects.

Consider the following:

IList<Customer> GetCustomers()
{
  List<Customer> res = new List<Customer>();

  DbCommand cmd = // initialize command
  IDataReader r = cmd.ExecuteReader();

  while (r.Read())
  {
     Customer c = ReadFieldsIntoCustomer(r);  // map the current row to a Customer
     res.Add(c);
  }

  return res;
}

In a unit test for GetCustomers, should the call to cmd.ExecuteReader() actually access the DB, or should its behavior be mocked?

I reckon that you shouldn't mock the actual call to the DB if the following holds true:

  1. A test server and the schema exist.
  2. The schema is stable (meaning you are not expecting major changes to it)
  3. The DAL has no smart logic: queries are constructed trivially (config/stored procs) and the deserialization logic is simple.

From my experience the great benefit of this approach is that you get to interact with the DB early, experiencing the 'feel', not just the 'look'. It saves you lots of headaches afterwards and is the best way to familiarize oneself with the schema.
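For what it's worth, a minimal sketch of the kind of test this implies, assuming NUnit and a dedicated test database; the CustomerDal class, its connection string, and the seed data are all hypothetical:

using NUnit.Framework;

[TestFixture]
public class CustomerDalIntegrationTests
{
    private CustomerDal _dal;

    [SetUp]
    public void SetUp()
    {
        // Point the DAL at the dedicated test server / stable schema
        // described in the criteria above.
        _dal = new CustomerDal("Server=testdb;Database=CrmTest;Trusted_Connection=True;");
    }

    [Test]
    public void GetCustomers_ReturnsSeededRows()
    {
        var customers = _dal.GetCustomers();

        // The test database holds known seed data, so we assert against the
        // real result set instead of against a mock.
        Assert.IsTrue(customers.Count > 0);
    }
}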

Many might argue that as soon as the execution flow crosses process boundaries, it ceases to be a unit test. I agree it has its drawbacks, especially when the DB is unavailable and you cannot run the tests at all.

However, I believe that this should be a valid thing to do in many cases.


My controversial opinion: OO Programming is vastly overrated [and treated like a silver bullet], when it is really just another tool in the toolbox, nothing more!


Readability is the most important aspect of your code.

Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.


It's ok to write garbage code once in a while

Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a Console or Web App, write some inline SQL (feels good), and blast out the requirement.


There is no difference between software developer, coder, programmer, architect ...

I've been in the industry for more than 10 years and still find it absolutely idiotic to try to distinguish between these "roles". You write code? You're a developer. You spend all day drawing fancy UML diagrams? You're a... well... I have no idea what you are; you're probably just trying to impress somebody. (Yes, I know UML.)


Storing XML in a CLOB in a relational database is often a horrible cop-out. Not only is it hideous in terms of performance, it shifts responsibility for correctly managing structure of the data away from the database architect and onto the application programmer.


This one is mostly web related but...

Use Tables for your web page layouts

If I was developing a gigantic site that needed to squeeze out performance I might think about it, but nothing gives me an easier way to get a consistent look in the browser than tables. The majority of applications that I develop are for around 100-1000 users, and possibly 100 at a time max. The extra bloat of the tables isn't killing my server by any means.


Once I saw the following from a co-worker:

equal = a.CompareTo(b) == 0;

I stated that he cannot assume that in a general case, but he just laughed.
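For readers wondering why that assumption is unsafe, here is a small illustrative sketch (the Product type is invented): CompareTo defines a sort order, not equality, so two distinct objects can legitimately compare as 0.

using System;

// A type ordered only by Price: CompareTo() == 0 does not imply Equals().
class Product : IComparable<Product>
{
    public string Name { get; set; }
    public decimal Price { get; set; }

    public int CompareTo(Product other) => Price.CompareTo(other.Price);
}

class Demo
{
    static void Main()
    {
        var a = new Product { Name = "Tea",    Price = 2.50m };
        var b = new Product { Name = "Coffee", Price = 2.50m };

        Console.WriteLine(a.CompareTo(b) == 0); // True: same sort position
        Console.WriteLine(a.Equals(b));         // False: not the same product
    }
}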


Inversion of control does not eliminate dependencies, but it sure does a great job of hiding them.


Programming is neither art nor science. It is an engineering discipline.

It's not art: programming requires creativity for sure. That doesn't make it art. Code is designed and written to work properly, not to be emotionally moving. Except for whitespace, changing code for aesthetic reasons breaks your code. While code can be beautiful, art is not the primary purpose.

It's not science: science and technology are inseparable, but programming is in the technology category. Programming is not systematic study and observation; it is design and implementation.

It's an engineering discipline: programmers design and build things. Good programmers design for function. They understand the trade-offs of different implementation options and choose the one that suits the problem they are solving.


I'm sure there are those out there who would love to parse words, stretching the definitions of art and science to include programming or constraining engineering to mechanical machines or hardware only. Check the dictionary. Also "The Art of Computer Programming" is a different usage of art that means a skill or craft, as in "the art of conversation." The product of programming is not art.


Relational Databases are a waste of time. Use object databases instead!

Relational database vendors try to fool us into believing that the only scalable, persistent and safe storage in the world is relational databases. I am a certified DBA. Have you ever spent hours trying to optimize a query and had no idea what was going wrong? Relational databases don't let you make your own search paths when you need them. You give away much of the control over the speed of your app into the hands of people you've never met, and they are not as smart as you think.

Sure, sometimes in a well-maintained database they come up with a quick answer for a complex query. But the price you pay for this is too high! You have to choose between writing raw SQL every time you want to read an entry of your data, which is dangerous, or using an object-relational mapper, which adds more complexity and more things outside your control.

More importantly, you are actively forbidden from coming up with smart search algorithms, because every damn roundtrip to the database costs you around 11 ms, which is too much. Imagine you know a super graph algorithm which will answer a specific question (one that might not even be expressible in SQL!) in reasonable time. But even if your algorithm is linear, and interesting algorithms are not linear, forget about combining it with a relational database: at 11 ms per roundtrip, enumerating a million-row table one row at a time already costs you around three hours.

Compare that with SandstoneDb, or Gemstone for Smalltalk! If you are into Java, give db4o a shot.

So, my advice is: use an object DB. Sure, they aren't perfect and some queries will be slower. But you will be surprised how many will be faster, because loading the objects does not require all these strange transformations between SQL and your domain data. And if you really need speed for a certain query, object databases have the query optimizer you should trust: your brain.


The worst thing about recursion is recursion.


Good Performance VS Elegant Design

They are not mutually exclusive, but I can't stand over-designed class structures/frameworks that have no clue about performance. I don't need a string of new This(new That(new Whatever())); to create an object that will tell me it's 5 AM in the morning, oh, and by the way, it's 217 days until Obama's birthday, and the weekend is 2 days away, when I only wanted to know if the gym was open.

Having a balance between the two is crucial. The code needs to get nasty when you need to push the processor to do something intensive, such as reading terabytes of data. Save the elegance for the places that consume 10% of the resources, which is probably more than 90% of the code.


Your job is to put yourself out of work.

When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.

If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.

Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.


Most of programming job interview questions are pointless. Especially those figured out by programmers.

It is a common case, at least in my and my friends' experience, where a puffed-up programmer asks you some tricky WTF he spent weeks googling for. The funny thing about that is, you get home and google it within a minute. It's like they often try to beat you up with their sophisticated weapons, instead of checking whether you'd be a capable, pragmatic team player to work with.

Similar stupidity IMO is when you're being asked for highly accessible fundamentals, like: "Oh wait, let me see if you can pseudo-code that insert_name_here-algorithm on a sheet of paper (sic!)". Do I really need to remember it while applying for a high-level programming job? Should I efficiently solve problems or puzzles?


Not everything needs to be encapsulated into its own method. Sometimes it is OK to have a method do more than one thing.


Opinion: Unit tests don't need to be written up front, and sometimes not at all.

Reasoning: Developers suck at testing their own code. We do. That's why we generally have test teams or QA groups.

Most of the time the code we write is too intertwined with other code to be tested separately, so we end up jumping through patterned hoops to provide testability. Not that those patterns are bad, but they can sometimes add unnecessary complexity, all for the sake of unit testing...

... which often doesn't work anyway. To write a comprehensive unit test requires a lot of time. Often more time than we're willing to give. And the more comprehensive the test, the more brittle it becomes if the interface of the thing it's testing changes, forcing a rewrite of a test that no longer compiles.


Software is not an engineering discipline.

We never should have let the computers escape from the math department.


My one:

Long switch statements are your friends. Really. At least in C#.

People tend to avoid long switch statements and discourage others from using them because they are "unmanageable" and "have bad performance characteristics".

Well, the thing is that in C#, switch statements are compiled automagically into jump tables (or hash-based lookups for string cases), so using them is actually the Best Thing To Do™ in terms of performance when you need to branch to many simple cases. Also, if the case statements are organized and grouped intelligently (for example in alphabetical order), they are not unmanageable at all.


  • Xah Lee: actually has some pretty noteworthy and legitimate viewpoints if you can filter out all the invective, and rationally evaluate statements without agreeing (or disagreeing) based solely on the personality behind the statements. A lot of my "controversial" viewpoints have been echoed by him, and other notorious "trolls" who have criticized languages or tools I use(d) on a regular basis.

  • [Documentation Generators](http://en.wikipedia.org/wiki/Comparison_of_documentation_generators): ... the kind where the creator invented some custom-made especially-for-documenting-sourcecode roll-your-own syntax (including, but not limited to, JavaDoc) are totally superfluous and a waste of time because:

    • 1) They are underused by the people who should be using them the most; and
    • 2) All of these mini-documentation-languages could easily be replaced with YAML

Relational databases are awful for web applications.

For example:

  • threaded comments
  • tag clouds
  • user search
  • maintaining record view counts
  • providing undo / revision tracking
  • multi-step wizards

Controversial, eh? My pick is the fact that C++ streams use << and >>. I hate it. They are shift operators. Overloading them in this way is plain bad practice. It makes me want to kill whoever came up with that and thought it was a good idea. GRRR.


Software is like toilet paper. The less you spend on it, the bigger of a pain in the ass it is.

That is to say, outsourcing is rarely a good idea.

I've always figured this to be true, but I never really knew the extent of it until recently. I have been "maintaining" (read: "fixing") some off-shored code recently, and it is a huge mess. It is easily costing our company more than the difference had it been developed in-house.

People outside your business will inherently know less about your business model, and therefore will not do as good a job programming any system that works within your business. Also, they know they won't have to support it, so there's no incentive to do anything other than half-ass it.


Java is not the best thing out there. Just because it comes with an 'Enterprise' sticker does not make it good. Nor does it make it fast. Nor does it make it the answer to every question.

Also, ROR is not all it is cracked up to be by the Blogosphere.

While I am at it, OOP is not always good. In fact, I think it is usually bad.


A good developer needs to know more than just how to code


Extension Methods are the work of the Devil

Everyone seems to think that extension methods in .Net are the best thing since sliced bread. The number of developers singing their praises seems to rise by the minute but I'm afraid I can't help but despise them and unless someone can come up with a brilliant justification or example that I haven't already heard then I will never write one. I recently came across this thread and I must say reading the examples of the highest voted extensions made me feel a little like vomiting (metaphorically of course).

The main reasons given for their extensiony goodness are increased readability, improved OO-ness and the ability to chain method calls better.

I'm afraid I have to differ: I find, in fact, that they unequivocally reduce readability and OO-ness by virtue of the fact that they are at their core a lie. If you need a utility method that acts upon an object, then write a utility method that acts on that object; don't lie to me. When I see aString.SortMeBackwardsUsingKlingonSortOrder, then string should have that method, because that call is telling me something about the string object, not something about the AnnoyingNerdReferences.StringUtilities class.
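A hedged sketch of the "lie" being described, with invented names: the same static utility call, once dressed up as if it belonged to string and once written as what it really is.

using System;

// The class and method names are invented to mirror the complaint above.
static class AnnoyingNerdReferences
{
    // Extension method: the call site reads as if String had this member...
    public static string SortMeBackwards(this string s)
    {
        char[] chars = s.ToCharArray();
        Array.Sort(chars);
        Array.Reverse(chars);
        return new string(chars);
    }
}

class Demo
{
    static void Main()
    {
        string word = "banana";

        Console.WriteLine(word.SortMeBackwards());                       // looks like a String member
        Console.WriteLine(AnnoyingNerdReferences.SortMeBackwards(word)); // what actually runs
    }
}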

LINQ was designed in such a way that chained method calls are necessary to avoid strange and uncomfortable expressions, so the extension methods that arise from LINQ are understandable; but in general, chained method calls reduce readability and lead to the sort of code we see in obfuscated Perl contests.

So, in short, extension methods are evil. Cast off the chains of Satan and commit yourself to extension free code.


Simplicity Vs Optimality

I believe it's very difficult to write code that's both simple and optimal.


"else" is harmful.


My controversial opinion is probably that John Carmack (ID Software, Quake etc.) is not a very good programmer.

Don't get me wrong, he's a very smart programmer in my opinion, but after I noticed the line "#define private public" in the Quake source code I couldn't help but think he's a guy that gets the job done no matter what, but by my definition not a good programmer :) This opinion has gotten me into a lot of heated discussions though ;)


Writing it yourself can be a valid option.

In my experience there seems to be too much enthusiasm when it comes to using 3rd party code to solve a problem. The option of solving the problem themselves does not usually cross people's minds. Don't get me wrong, though: I am not advocating never using libraries. What I am saying is: among the possible frameworks and modules you are considering, add the option of implementing the solution yourself.

But why would you code your own version?

  • Don't reinvent the wheel. But, if you only need a piece of wood, do you really need a whole cart wheel? In other words, do you really need openCV to flip an image along an axis?
  • Compromise. You usually have to make compromises concerning your design, in order to be able to use a specific library. Is the amount of changes you have to incorporate worth the functionality you will receive?
  • Learning. You have to learn to use these new frameworks and modules. How long will it take you? Is it worth your while? Will it take longer to learn than to implement?
  • Cost. Not everything is free, and that includes your time. Consider how much time the software you are about to use will save you, and whether it is worth its price. (Also remember that you have to invest time to learn it.)
  • You are a programmer, not ... a person who just clicks things together (sorry, couldn't think of anything witty).

The last point is debatable.


Schooling ruins creativity *

*"Ruins" means "potentially ruins"

Granted, schooling is needed! Everyone needs to learn stuff before they can use it - however, all those great ideas you had about how to do a certain strategy for a specific business-field can easily be thrown into that deep brain-void of ours if we aren't careful.

As you learn new things and acquire new skills, you are also boxing your mindset in around those new things and skills, since they apparently are "the way to do it". Being humans, we tend to listen to authorities - be it a teacher, a consultant, a co-worker or even a site / forum you like. We should ALWAYS be aware of that "flaw" in how our minds work. Listen to what other people say, but don't take what they say for granted. Always keep a critical point of view on any new information you receive.

Instead of thinking "Wow, that's smart. I will use that from now on", we should think "Wow, that's smart. Now, how can I use that in my personal toolbox of skills and ideas".


Although I'm in full favor of Test-Driven Development (TDD), I think there's a vital step before developers even start the full development cycle of prototyping a solution to the problem.

We too often get caught up trying to follow our TDD practices for a solution that may be misdirected because we don't know the domain well enough. Simple prototypes can often elucidate these problems.

Prototypes are great because you can quickly churn through and throw away more code than when you're writing tests first (sometimes). You can then begin the development process with a blank slate but a better understanding.


Python does everything that other programming languages do in half the dev time... and so does Google!!! Check out Unladen Swallow if you disagree.

Wait, this is a fact. Does it still qualify as an answer to this question?


Usability problems are never the user's fault.

I cannot count how often a problem turned up when some user did something that everybody in the team considered "just a stupid thing to do". Phrases like "why would somebody do that?" or "why doesn't he just do XYZ" usually come up.

Even though many are weary of hearing me say this: if a real-life user tried to do something that either did not work, caused something to go wrong or resulted in unexpected behaviour, then it can be anybody's fault, but not the user's!

Please note that I do not mean people who intentionally misuse the software. I am referring to the presumable target group of the software.


Non-development staff should not be allowed to manage development staff.

Correction: Staff with zero development experience should not be allowed to manage development staff.


You only need 3 to 5 languages to do everything. C is a definite. Maybe assembly, but you should know it and be able to use it. Maybe JavaScript and/or Java if you code for the web. A shell language like bash, and one HLL like Lisp, might be useful. Anything else is a distraction.


The latest design patterns tend to be so much snake oil. As has been said previously in this question, overuse of design patterns can harm a design much more than help it.

If I hear one more person saying that "everyone should be using IOC" (or some similar pile of turd), I think I'll be forced to hunt them down and teach them the error of their ways.


Most developers don't have a clue

Yup .. there you go. I've said it. I find that from all the developers that I personally know .. just a handful are actually good. Just a handful understand that code should be tested ... that the Object Oriented approach to developing is actually there to help you. It frustrates me to no end that there are people who get the title of developer while in fact all they can do is copy and paste a bit of source code and then execute it.

Anyway ... I'm glad initiatives like stackoverflow are being started. It's good for developers to wonder. Is there a better way? Am I doing it correctly? Perhaps I could use this technique to speed things up, etc ...

But nope ... the majority of developers just learn a language that they are required by their job and stick with it until they themselves become old and grumpy developers that have no clue what's going on. All they'll get is a big paycheck since they are simply older than you.

Ah well ... life is unjust in the IT community and I'll be taking steps to ignore such people in the future. Hooray!


Tcl/Tk is the best GUI language/toolkit combo ever

It may lack specific widgets and be less good-looking than the new kids on the block, but its model is elegant and so easy to use that one can build working GUIs faster by typing commands interactively than by using a visual interface builder. Its expressive power is unbeatable: other solutions (Gtk, Java, .NET, MFC...) typically require ten to one hundred LOC to get the same result as a Tcl/Tk one-liner. All without even sacrificing readability or stability.

pack [label .l -text "Hello world!"] [button .b -text "Quit" -command exit]

Junior programmers should be assigned to doing object/module design and design maintenance for several months before they are allowed to actually write or modify code.

Too many programmers/developers make it to the 5 and 10 year marks without understanding the elements of good design. It can be crippling later when they want to advance beyond just writing and maintaining code.


Opinion: developers should be testing their own code

I've seen too much crap handed off to test only to have it not actually fix the bug in question, incurring communication overhead and fostering irresponsible practices.


You must know how to type to be a programmer.

It's controversial among people who don't know how to type, but who insist that they can two-finger hunt-and-peck as fast as any typist, or that they don't really need to spend that much time typing, or that Intellisense relieves the need to type...

I've never met anyone who does know how to type, but insists that it doesn't make a difference.

See also: Programming's Dirtiest Little Secret


Opinion: SQL is code. Treat it as such

That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.

I hate when I see sloppy free-formatted SQL code. If you scream when you see both styles of curly braces on a page, why don't you scream when you see free-formatted SQL, or SQL that obscures or obfuscates the JOIN condition?


If you only know one language, no matter how well you know it, you're not a great programmer.

There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning then that's all you need. I don't believe it- every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.

It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.


There are some (very few) legitimate uses for goto (particularly in C, as a stand-in for exception handling).


Cowboy coders get more done.

I spend my life in the startup atmosphere. Without the Cowboy coders we'd waste endless cycles making sure things are done "right".

As we know, it's basically impossible to foresee all issues. The Cowboy coder runs head-on into these problems and is forced to solve them much more quickly than someone who tries to foresee them all.

Though, if you're Cowboy coding you had better refactor that spaghetti before someone else has to maintain it. ;) The best ones I know use continuous refactoring. They get a ton of stuff done, don't waste time trying to predict the future, and through refactoring it becomes maintainable code.

Process always gets in the way of a good Cowboy, no matter how Agile it is.


Opinion: Duration in the development field does not always mean the same as experience.

Many trades look at "years of experience" in a language. Yes, 5 years of C# can make sense, since you may learn new tricks and whatnot. However, if you are with the same company and maintaining the same code base for a number of years, I feel you are not gaining as much exposure to different situations as a person who works on different projects and client needs.

I once interviewed a person who prided himself on having 10 years of programming experience and worked with VB5, 6, and VB.Net... all in the same company during that time. After more probing, I found out that while he worked with all of those versions of VB, he was only upgrading and constantly maintaining his original VB5 app. He never modified the architecture and let the upgrade wizards do their thing. I have interviewed people with only 2 years in the field who, having worked on multiple projects, had more "experience" than he did.


If your text editor doesn't do good code completion, you're wasting everyone's time.

Quickly remembering thousands of argument lists, spellings, and return values (not to mention class structures and similarly complex organizational patterns) is a task computers are good at and people (comparatively) are not. I buy wholeheartedly that slowing yourself down a bit and avoiding the gadget/feature cult is a great way to increase efficiency and avoid bugs, but there is simply no benefit to spending 30 seconds hunting unnecessarily through sourcecode or docs when you could spend nil... especially if you just need a spelling (which is more often than we like to admit).

Granted, if there isn't an editor that provides this functionality for your language, or the task is simple enough to knock out in the time it would take to load a heavier editor, nobody is going to tell you that Eclipse and 90 plugins is the right tool. But please don't tell me that the ability to H-J-K-L your way around like it's 1999 really saves you more time than hitting escape every time you need a method signature... even if you do feel less "hacker" doing it.

Thoughts?


Correct every defect when it's discovered. Not just "severity 1" defects; all defects.

Establish a deployment mechanism that makes application updates immediately available to users, but allows them to choose when to accept these updates. Establish a direct communication mechanism with users that enables them to report defects, relate their experience with updates, and suggest improvements.

With aggressive testing, many defects can be discovered during the iteration in which they are created; immediately correcting them reduces developer interrupts, a significant contributor to defect creation. Immediately correcting defects reported by users forges a constructive community, replacing product quality with product improvement as the main topic of conversation. Implementing user-suggested improvements that are consistent with your vision and strategy produces a community of enthusiastic evangelists.


Programming is in its infancy.

Even though programming languages and methodologies have been evolving very quickly for years now, we still have a long way to go. The signs are clear:

  1. Language Documentation is spread haphazardly across the internet (stackoverflow is helping here).

  2. Languages cannot evolve syntactically without breaking prior versions.

  3. Debugging is still often done with printf.

  4. Language libraries or other forms of large scale code reuse are still pretty rare.

Clearly all of these are improving, but it would be nice if we all could agree that this is the beginning and not the end. =)


Greater-than operators (>, >=) should be deprecated

I tried coding with a preference for less-than over greater-than for a while and it stuck! I don't want to go back, and indeed I feel that everyone should do it my way in this case.

Consider common mathematical 'range' notation: 0 <= i < 10

That's easy to approximate in code now and you get used to seeing the idiom where the variable is repeated in the middle joined by &&:

if (0 <= i && i < 10)
    return true;
else
    return false;

Once you get used to that pattern, you'll never look at silliness like

if ( ! (i < 0 || i >= 10))
    return true;

the same way again.

Long sequences of relations become a bit easier to work with because the operands tend towards nondecreasing order.

Furthermore, a preference for operator< is enshrined in the C++ standard library: in some cases equivalence is defined in terms of it, as !(a < b || b < a).
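A small sketch of the chained, nondecreasing style mentioned above; the method and parameter names are invented for illustration, and every bound reads left to right.

using System;

static class RangeChecks
{
    // All operands appear in ascending order: 0 <= lo <= i < hi <= data.Length
    static bool InWindow(int[] data, int lo, int i, int hi) =>
        0 <= lo && lo <= i && i < hi && hi <= data.Length;

    static void Main()
    {
        int[] data = { 10, 20, 30, 40, 50 };
        Console.WriteLine(InWindow(data, lo: 1, i: 3, hi: 5)); // True
        Console.WriteLine(InWindow(data, lo: 1, i: 5, hi: 5)); // False: i must stay below hi
    }
}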


One I have been tossing around for a while:

The data is the system.

Processes and software are built for data, not the other way around.

Without data, the process/software has little value. Data still has value without a process or software around it.

Once we understand the data, what it does, how it interacts, the different forms it exists in at different stages, only then can a solution be built to support the system of data.

Successful software/systems/processes seem to have an acute awareness, if not a fanatical mindfulness, of "where" the data is at any given moment.


The customer is not always right.

In most cases that I deal with, the customer is the product owner, aka "the business". All too often, developers just code and do not try to provide a vested stake in the product. There is too much of a misconception that the IT Department is a "company within a company", which is a load of utter garbage.

I feel my role is that of helping the business express their ideas - with the mutual understanding that I take an interest in understanding the business so that I can provide the best experience possible. That route implies that there will be times when the product owner asks for something that he/she feels is the next revolution in computing, leaving someone to either agree, or explain the more likely reason why no one does it that way. It is mutually beneficial, because the product owner understands the thought that goes into the product, and the development team understands that they do more than sling code.

This has actually started to lead us down the path of increased productivity. How? Since the communication has improved due to disagreements on both sides of the table, it is more likely that we come together earlier in the process and come to a mutually beneficial solution to the product definition.


Excessive HTML in PHP files: sometimes necessary

Excessive Javascript in PHP files: trigger the raptor attack

While I can just about cope with all your switching between echoing and ?> <?php 'ing HTML (after all, PHP is just a processor for HTML), lines and lines of JavaScript added in make it a completely unmaintainable mess.

People have to grasp this: they are two separate programming languages. Pick one to be your primary language. Then go on and find a quick, clean and easily maintainable way to make your primary language include the secondary one.

The reason why you jump between PHP, Javascript and HTML all the time is because you are bad at all three of them.

Ok, maybe it's not exactly controversial. I had the impression this was a general frustration-venting topic :)


There is a difference between a programmer and a developer. An example: a programmer writes pagination logic, a developer integrates pagination on a page.


If a developer cannot write clear, concise and grammatically correct comments then they should have to go back and take English 101.

We have developers and (the horror) architects who cannot write coherently. When their documents are reviewed they say things like "oh, don't worry about grammatical errors or spelling - that's not important". Then they wonder why their convoluted garbage documents become convoluted buggy code.

I tell the interns that I mentor that if you can't communicate your great ideas verbally or in writing you may as well not have them.


"Java Sucks" - yeah, I know that opinion is definitely not held by all :)

I have that opinion because the majority of Java applications I've seen are memory hogs, run slowly, have horrible user interfaces, and so on.

G-Man


I'd say that my most controversial opinion on programming is that I honestly believe you shouldn't worry so much about throw-away code and rewriting code. Too many times people feel that if you write something down, then changing it means you did something wrong. But the way my brain works is to get something very simple working, and update the code slowly, while ensuring that the code and the test continue to function together. It may end up creating classes, methods, additional parameters, etc. that I know full well will go away in a few hours. But I do it because I want to take only small steps toward my goal. In the end, I don't think I spend any more time using this technique than the programmers that stare at the screen trying to figure out the best design up front before writing a line of code.

The benefit I get is that I'm not having to constantly deal with software that no longer works because I happen to break it somehow and am trying to figure out what stopped working and why.


Realizing sometimes good enough is good enough, is a major jump in your value as a programmer.

Note that when I say 'good enough', I mean 'good enough', not it's some crap that happens to work. But then again, when you are under a time crunch, 'some crap that happens to work', may be considered 'good enough'.


I hate universities and institutes offering short courses for teaching programming to newcomers. It is an outright disgrace and shows contempt for the art and science of programming.

They start teaching C, Java, VB (disgusting) to people without a good grasp of hardware and the fundamental principles of computers. They should first be taught about the MACHINE, by books like Morris Mano's Computer System Architecture, and then taught the concept of instructing the machine to solve problems, instead of having the semantics and syntax of one programming language etched into them.

Also, I don't understand government schools and colleges teaching children the basics of computers using commercial operating systems and software. At least in my country (India), not many students can afford to buy operating systems or even discounted office suites, let alone the development software juggernaut (compilers, IDEs, etc.). This encourages theft and piracy and makes the act of copying and stealing software from their institutes' libraries seem justified.

Again, they are taught to use some products, not the fundamental ideas.

Think about it: what if you were taught only that 2 x 2 is 4, and not the concept of multiplication?

Or if you were taught how to measure the length of a pole leaning against your school's compound wall, but not the Pythagorean theorem?


If you have any idea how to program you are not fit to place a button on a form

Is that controversial enough? ;)

No matter how hard we try, it's almost impossible to have appropriate empathy with 53-year-old Doris who has to use our order-entry software. We simply cannot grasp the mental model of what she imagines is going on inside the computer, because we don't need to imagine: we know what's going on, or have a very good idea.

Interaction Design should be done by non-programmers. Of course, this is never actually going to happen. Contradictorily I'm quite glad about that; I like UI design even though deep down I know I'm unsuited to it.

For further info, read the book The Inmates Are Running the Asylum. Be warned, I found this book upsetting and insulting; it's a difficult read if you are a developer that cares about the user's experience.


As most others here, I try to adhere to principles like DRY and not being a human compiler.

Another strategy I want to push is "tell, don't ask". Instead of cluttering all objects with getters/setters, essentially making a sieve of them, I'd rather tell them to do stuff.
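A minimal, invented sketch of the difference: instead of exposing the balance and letting every caller re-implement the overdraft rule, the Account is told to withdraw and enforces the rule itself.

using System;

class Account
{
    private decimal _balance;

    public Account(decimal openingBalance) => _balance = openingBalance;

    // Telling: the rule lives with the data it protects.
    public void Withdraw(decimal amount)
    {
        if (amount > _balance)
            throw new InvalidOperationException("Insufficient funds.");
        _balance -= amount;
    }
}

class Demo
{
    static void Main()
    {
        var account = new Account(100m);
        account.Withdraw(40m);  // tell, don't ask
        // The "asking" style would pull the balance out via a getter and
        // re-check the rule at every call site.
        Console.WriteLine("Withdrawal accepted.");
    }
}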

This seems to go straight against good enterprise practice, with its dumb entity objects and thick service layer (that does plenty of asking). Hmmm, thoughts?


Zealous adherence to standards stands in the way of simplicity.

MVC is over-rated for websites. It's mostly just VC, sometimes M.


Controversial to self, because some things are better left unsaid, lest others paint you as too much of an egotist. However, here it is:

If it is to be, it begins with me


Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.

I think that a method should be created wherever you can name one.


Human brain is the master key to all locks.

There is nothing in this world that can move faster than your brain. Trust me, this is not philosophical but practical. Well, as far as opinions are concerned, they are as under:


1) Never go outside the boundary specified by the programming language. A simple example would be pointers in C and C++. Don't misuse them, as you are likely to get the DAMN SEGMENTATION FAULT.

2) Always follow the coding standards. Yes, what you are reading is correct: coding standards do a lot for your program. After all, your program is written to be executed by a machine but to be understood by some other brain :)


The users aren't idiots -- you are.

So many times I've heard developers say "so-and-so is an idiot" and my response is typically "he may be an idiot but you allowed him to be one."


You must know C to be able to call yourself a programmer!


The more process you put around programming, the worse the code becomes

I have noticed something in my 8 or so years of programming, and it seems ridiculous. It's that the only way to get quality is to employ quality developers, and remove as much process and formality from them as you can. Unit testing, coding standards, code/peer reviews, etc only reduce quality, not increase it. It sounds crazy, because the opposite should be true (more unit testing should lead to better code, great coding standards should lead to more readable code, code reviews should improve the quality of code) but it's not.

I think it boils down to the fact we call it "Software Engineering" when really it's design and not engineering at all.


Some numbers to substantiate this statement:

From the Editor

IEEE Software, November/December 2001

Quantifying Soft Factors

by Steve McConnell

...

Limited Importance of Process Maturity

... In comparing medium-size projects (100,000 lines of code), the one with the worst process will require 1.43 times as much effort as the one with the best process, all other things being equal. In other words, the maximum influence of process maturity on a project’s productivity is 1.43. ...

... What Clark doesn’t emphasize is that for a program of 100,000 lines of code, several human-oriented factors influence productivity more than process does. ...

... The seniority-oriented factors alone (AEXP, LTEX, PEXP) exert an influence of 3.02. The seven personnel-oriented factors collectively (ACAP, AEXP, LTEX, PCAP, PCON, PEXP, and SITE §) exert a staggering influence range of 25.8! This simple fact accounts for much of the reason that non-process-oriented organizations such as Microsoft, Amazon.com, and other entrepreneurial powerhouses can experience industry-leading productivity while seemingly shortchanging process. ...

The Bottom Line

... It turns out that trading process sophistication for staff continuity, business domain experience, private offices, and other human-oriented factors is a sound economic tradeoff. Of course, the best organizations achieve high motivation and process sophistication at the same time, and that is the key challenge for any leading software organization.

§ Read the article for an explanation of these acronyms.


Classes should fit on the screen.

If you have to use the scroll bar to see all of your class, your class is too big.

Code folding and miniature fonts are cheating.


Design Patterns are a symptom of Stone Age programming language design

They have their purpose. A lot of good software gets built with them. But the fact that there was a need to codify these "recipes" for psychological abstractions about how your code works/should work speaks to a lack of programming languages expressive enough to handle this abstraction for us.

The remedy, I think, lies in languages that allow you to embed more and more of the design into the code, by defining language constructs that might not exist or might not have general applicability but really really make sense in situations your code deals with incessantly. The Scheme people have known this for years, and there are things possible with Scheme macros that would make most monkeys-for-hire piss their pants.


Performance does matter.


Premature optimization is NOT the root of all evil! Lack of proper planning is the root of all evil.

Remember the old naval saw

Proper Planning Prevents P*ss Poor Performance!


Programmers should never touch Word (or PowerPoint)

Unless you are developing a word or a document processing tool, you should not touch a Word processor that emits only binary blobs, and for that matter:

Generated XML files are binary blobs

Programmers should write plain text documents. The documents a programmer writes need to convey intention only, not formatting. They must be producible with the programming tool-chain: editor, version control, search utilities, build system and the like. When you already have, and know how to use, that tool-chain, every other document production tool is a horrible waste of time and effort.

When there is a need to produce a document for non-programmers, a lightweight markup language should be used such as reStructuredText (if you are writing a plain text file, you are probably writing your own lightweight markup anyway), and generate HTML, PDF, S5, etc. from it.


Haven't tested it yet for controversy, but there may be potential:

The best line of code is the one you never wrote.


Social skills matter more than technical skills

Agreeable but average programmers with good social skills will have a more successful career than outstanding programmers who are disagreeable people.


Sometimes jumping on the bandwagon is ok

I get tired of people exhibiting "grandpa syndrome" ("You kids and your newfangled Test Driven Development. Every big technology that's come out in the last decade has sucked. Back in my day, we wrote real code!"... you get the idea).

Sometimes things that are popular are popular for a reason.


That best practices are a hazard because they ask us to substitute slogans for thinking.


I think that using regions in C# is totally acceptable to collapse your code while in VS. Too many people try to say it hides your code and makes it hard to find things. But if you use them properly they can be very helpful to identify sections of code.
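A small sketch of #region used the way this answer suggests: to label sections rather than to bury code. The class and members are invented for illustration.

public class CustomerService
{
    #region Public API

    public void Add(string name)    { /* ... */ }
    public void Remove(string name) { /* ... */ }

    #endregion

    #region Private helpers

    private static void Validate(string name)
    {
        if (string.IsNullOrWhiteSpace(name))
            throw new System.ArgumentException("Name is required.", nameof(name));
    }

    #endregion
}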


We're software developers, not C/C#/C++/PHP/Perl/Python/Java/... developers.

After you've been exposed to a few languages, picking up a new one and being productive with it is a small task. That is to say that you shouldn't be afraid of new languages. Of course, there is a large difference between being productive and mastering a language. But, that's no reason to shy away from a language you've never seen. It bugs me when people say, "I'm a PHP developer." or when a job offer says, "Java developer". After a few years experience of being a developer, new languages and APIs really shouldn't be intimidating and going from never seeing a language to being productive with it shouldn't take very long at all. I know this is controversial but it's my opinion.


I think its fine to use goto-statements, if you use them in a sane way (and a sane programming language). They can often make your code a lot easier to read and don't force you to use some twisted logic just to get one simple thing done.


MS Access* is a Real Development Tool and it can be used without shame by professional programmers

Just because a particular platform is a magnet for hacks and secretaries who think they are programmers shouldn't besmirch the platform itself. Every platform has its benefits and drawbacks.

Programmers who bemoan certain platforms or tools or belittle them as "toys" are more likely to be far less knowledgeable about their craft than their ego has convinced them they are. It is a definite sign of overconfidence for me to hear a programmer bash any environment that they have not personally used extensively enough to know well.

* Insert just about any maligned tool (VB, PHP, etc.) here.


If it isn't worth testing, it isn't worth building


XHTML is evil. Write HTML

You will have to set the MIME type to text/html anyway, so why fool yourself into believing that you are really writing XML? Whoever downloads your page is going to treat it as HTML, so make it HTML.

And with that, feel free and happy to not close your <li>, it isn't necessary. Don't close the html tag, the file is over anyway. It is valid HTML and it can be parsed perfectly.

It will create more readable, less boilerplate code and you don't lose a thing. HTML parsers work well!

And when you are done, move on to HTML5. It is better.


That (at least during initial design) every database table (well, almost every one) should be clearly defined to contain some clearly understandable business entity or system-level domain abstraction, and that, whether or not you use it as the primary key or as foreign keys in other dependent tables, some column (attribute) or subset of the table's attributes should be clearly defined to represent a unique key for that table (entity/abstraction). This is the only way to ensure that the overall table structure represents a logically consistent model of the complete system data structure, without overlap or misunderstood flattening. I am a firm believer in using non-meaningful surrogate keys for PKs and FKs and join functionality (for performance, ease of use, and other reasons), but I believe the tendency in this direction has taken the database community too far away from the original Codd principles, and we have lost much of the benefit (of database consistency) that natural keys provided.

So why not use both?


I work in ASP.NET / VB.NET a lot and find ViewState an absolute nightmare. It's enabled by default on the majority of controls and causes a large blob of encoded data at the start of every web page. The bigger a page gets in terms of controls, the larger the ViewState data becomes. Most people don't bat an eye at it, but it creates a large chunk of data which is usually irrelevant to the tasks being carried out on the page. You must manually disable this option on all ASP controls that don't need it. It's either that or have custom controls for everything.
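As a hedged sketch of what that manual opting-out looks like in a code-behind (the page and control names are invented; EnableViewState itself is the standard System.Web.UI.Control property):

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class OrdersPage : Page
{
    protected Label lblStatus;     // normally declared in the designer file
    protected GridView grdOrders;

    protected void Page_Init(object sender, EventArgs e)
    {
        lblStatus.EnableViewState = false;  // static text: nothing worth round-tripping
        grdOrders.EnableViewState = false;  // the grid is re-bound on every request anyway
    }
}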

On some pages I work with, half of the page is made up of ViewState, which is a shame really as there are probably better ways of doing it.

That's just one small example I can think of in terms of language/technology opinions. It may be controversial.

By the way, you might want to edit voting on this thread, it could get quite heated by some ;)


Opinion: Frameworks and third-party components should only be used as a last resort.

I often see programmers immediately pick a framework to accomplish a task without learning the underlying approach it takes to work. Something will inevitably break, or we'll find a limitation we didn't account for, and we'll be immediately stuck and have to rethink a major part of the system. Frameworks are fine to use as long as the choice is carefully thought out.


Developers should be able to modify production code without getting permission from anyone as long as they document their changes and notify the appropriate parties.


1. You should not follow web standards - all the time.

2. You don't need to comment your code.

As long as it's understandable by a stranger.


Pagination is never what the user wants

If you start having the discussion about where to do pagination, in the database, in the business logic, on the client, etc. then you are asking the wrong question. If your app is giving back more data than the user needs, figure out a way for the user to narrow down what they need based on real criteria, not arbitrary sized chunks. And if the user really does want all those results, then give them all the results. Who are you helping by giving back 20 at a time? The server? Is that more important than your user?

[EDIT: clarification, based on comments]

As a real world example, let's look at this Stack Overflow question. Let's say I have a controversial programming opinion. Before I post, I'd like to see if there is already an answer that addresses the same opinion, so I can upvote it. The only option I have is to click through every page of answers.

I would prefer one of these options:

  1. Allow me to search through the answers (a way for me to narrow down what I need based on real criteria).

  2. Allow me to see all the answers so I can use my browser's "find" option (give me all the results).

The same applies if I just want to find an answer I previously read, but can't find anymore. I don't know when it was posted or how many votes it has, so the sorting options don't help. And even if I did, I still have to play a guessing game to find the right page of results. The fact that the answers are paginated and I can directly click into one of a dozen pages is no help at all.

--
bmb


Managers know everything

In my experience, managers usually didn't get there by knowing code. Yet no matter what you tell them, it's too long, not right, or too expensive.

And another that follows on from the first:

There's never time to do it right but there's always time to do it again

A good engineer friend once said that in anger to describe a situation where management halved his estimates, got a half-assed version out of him then gave him twice as much time to rework it because it failed. It's a fairly regular thing in the commercial software world.

And one that came to mind today while trying to configure a router with only a web interface:

Web interfaces are for suckers

The CLI on the previous version of the firmware was oh so nice. This version has a web interface, which attempts to hide all of the complexity of networking from clueless IT droids, and can't even get VLANs correct.


You don't have to program everything

I'm getting tired of how everything, absolutely everything, needs to be stuffed into a program, as if that is always faster. Everything needs to be web-based, everything needs to be done via a computer. Please, just use your pen and paper. It's faster and less maintenance.


A Developer should never test their own software

Development and testing are two diametrically opposed disciplines. Development is all about construction, and testing is all about demolition. Effective testing requires a specific mindset and approach where you are trying to uncover developer mistakes, find holes in their assumptions, and flaws in their logic. Most people, myself included, are simply unable to place themselves and their own code under such scrutiny and still remain objective.


"Googling it" is okay!

Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.

Too often I hear googling answers to problems being treated as grounds for criticism, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.

What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.

(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)


Globals and/or Singletons are not inherently evil

I come from more of a sysadmin, shell, Perl (and my "real" programming), PHP type background; last year I was thrown into a Java development gig.

Singletons are evil. Globals are so evil they are not even allowed. Yet, Java has things like AOP, and now various "Dependency Injection" frameworks (we used Google Guice). AOP less so, but DI things for sure give you what? Globals. Uhh, thanks.


Source files are SO 20th century.

Within the body of a function/method, it makes sense to represent procedural logic as linear text. Even when the logic is not strictly linear, we have good programming constructs (loops, if statements, etc) that allow us to cleanly represent non-linear operations using linear text.

But there is no reason that I should be required to divide my classes among distinct files or sort my functions/methods/fields/properties/etc in a particular order within those files. Why can't we just throw all those things into a big database file and let the IDE take care of sorting everything dynamically? If I want to sort my members by name then I'll click the member header on the members table. If I want to sort them by accessibility then I'll click the accessibility header. If I want to view my classes as an inheritance tree, then I'll click the button to do that.

Perhaps classes and members could be viewed spatially, as if they were some sort of entities within a virtual world. If the programmer desired, the IDE could automatically position classes & members that use each other near each other so that they're easy to find. Imagine being able to zoom in and out of this virtual world. Zoom all the way out and you can see namespace galaxies with little class planets in them. Zoom in to a namespace and you can see class planets with method continents and islands and inner classes as orbiting moons. Zoom in to a method, and you see... the source code for that method.

Basically, my point is that in modern languages it doesn't matter what file(s) you put your classes in or in what order you define a class's members, so why are we still forced to use these archaic practices? Remember when Gmail came out and Google said "search, don't sort"? Well, why can't the same philosophy be applied to programming languages?


QA can be done well, over the long haul, without exploring all forms of testing

Lots of places seem to have an "approach", how "we do it". This seems to implicitly exclude other approaches.

This is a serious problem over the long term, because the primary function of QA is to file bugs -and- get them fixed.

You cannot do this well if you are not finding as many bugs as possible. When you exclude methodologies, for example, by being too black-box dependent, you start to ignore entire classes of discoverable coding errors. That means, by implication, you are making entire classes of coding errors unfixable, except when someone else stumbles on it.

The underlying problem often seems to be management + staff. Managers with this problem seem to have narrow thinking about the computer science and/or the value proposition of their team. They tend to create teams that reflect their approach, and a whitelist of testing methods.

I am not saying you can or should do everything all the time. Let's face it, some test methods are simply going to be a waste of time for a given product. And some methodologies are more useful at certain levels of product maturity. But what I think is missing is the ability of testing organizations to challenge themselves to learn new things, and apply that to their overall performance.

Here's a hypothetical conversation that would sum it up:

Me: You tested that startup script for 10 years, and you managed to learn NOTHING about shell scripts and how they work?!

Tester: Yes.

Me: Permissions?

Tester: The installer does that

Me: Platform, release-specific dependencies?

Tester: We file bugs for that

Me: Error handling?

Tester: When errors happen, customer support sends us some info.

Me: Okay...(starts thinking about writing post in stackoverflow...)


Getters and Setters are Highly Overused

I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public, maybe a bit different if you're using threads (but that's generally not the case) or if your accessors have business/presentation logic (something 'strange' at least).

I'm not in favor of public fields, but I'm against making a getter/setter (or Property) for every one of them, and then claiming that doing so is encapsulation or information hiding... ha!

UPDATE:

This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).

First of all: anyone who uses public fields deserves jail time

Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.

Many people think:

private fields + public accessors == encapsulation

I say (automatic or not) generation of getter/setter pairs for your fields effectively goes against the so-called encapsulation you are trying to achieve.
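
To make the point concrete, here is a hedged sketch of my own (the class names are invented): the first two types are interchangeable from a caller's point of view, only the third actually hides anything.

using System;

// Callers can push the value around at will in both of these:
public class Account
{
    public decimal Balance;                  // public field
}

public class AccountWithAccessors
{
    public decimal Balance { get; set; }     // trivial getter/setter pair
}

// Encapsulation proper: the invariant lives next to the data.
public class GuardedAccount
{
    private decimal balance;

    public decimal Balance
    {
        get { return balance; }
    }

    public void Deposit(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException("amount");
        balance += amount;
    }
}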

Lastly, let me quote Uncle Bob in this topic (taken from chapter 6 of "Clean Code"):

There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?


In my workplace, I've been trying to introduce more Agile/XP development habits. Continuous Design is the one I've felt most resistance on so far. Maybe I shouldn't have phrased it as "let's round up all of the architecture team and shoot them"... ;)


Two lines of code is too many.

If a method has a second line of code, it is a code smell. Refactor.


  • Soon we are going to program in a world without databases.

  • AOP and dependency injection are the GOTO of the 21st century.

  • Building software is a social activity, not a technical one.

  • Joel has a blog.


Programmers who don't code in their spare time for fun will never become as good as those that do.

I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.

(Note: I'm not saying good programmers do nothing else than programming, but they do more than program from 9 to 5)


If I were being controversial, I'd have to suggest that Jon Skeet isn't omnipotent..


The use of Hungarian notation should be punished with death.

That should be controversial enough ;)


Opinion: Duration in the development field does not always mean the same as experience.

Many employers look at "years of experience" in a language. Yes, 5 years of C# can mean something, since you may learn new tricks and whatnot. However, if you have been at the same company maintaining the same code base for a number of years, I feel you are not getting the same exposure to different situations as a person who works across different projects and client needs.

I once interviewed a person who prided himself on having 10 years of programming experience and worked with VB5, 6, and VB.Net... all in the same company during that time. After more probing, I found out that while he worked with all of those versions of VB, he was only upgrading and constantly maintaining his original VB5 app. He never modified the architecture and let the upgrade wizards do their thing. I have interviewed people with only 2 years in the field who have worked on multiple projects and have more real "experience" than he did.


The best programmers trace all their code in the debugger and test all paths.

Well... the OP said controversial!


Don't write code, remove code!

As a smart teacher once told me: "Don't write code. Writing code is bad. Removing code is good. And if you have to write code - write small code..."


Explicit self in Python's method declarations is poor design choice.

Method calls got syntactic sugar, but declarations didn't. It's a leaky abstraction (by design!) that causes annoying errors, including runtime errors with apparent off-by-one error in reported number of arguments.


"Programmers must do programming on the side, or they're never as good as those who do."

As kpollock said, imagine saying that for doctors, or soldiers...

The main thing isn't so much as whether they code, but whether they think about it. Computing Science is an intellectual exercise, you don't necessarily need to code to think about problems that makes you better as a programmer.

It's not as if Einstein got to play with particles and waves when he was off from his research.


I'm probably gonna get roasted for this, but:

Making invisible characters syntactically significant in python was a bad idea

It's distracting, causes lots of subtle bugs for novices and, in my opinion, wasn't really needed. About the only code I've ever seen that didn't voluntarily follow some sort of decent formatting guide was from first-year CS students. And even if code doesn't follow "nice" standards, there are plenty of tools out there to coerce it into a more pleasing shape.


Sometimes it's appropriate to swallow an exception.

For UI bells and whistles, prompting the user with an error message is disruptive, and there is usually nothing for them to do about it anyway. In this case, I just log it and deal with it when it shows up in the logs.
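
A minimal sketch of what that looks like, assuming the refresh call is some hypothetical, purely cosmetic piece of UI:

// Inside some UI event handler; RefreshRecentItemsPreview is hypothetical.
try
{
    RefreshRecentItemsPreview();   // purely cosmetic panel refresh
}
catch (Exception ex)
{
    // Deliberately swallowed: the user can't act on it, but the logs can.
    System.Diagnostics.Trace.TraceWarning("Recent items preview failed to refresh: " + ex);
}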


Code Generation is bad

I hate languages that require you to make use of code generation (or copy&paste) for simple things, like JavaBeans with all their Getters and Setters.

C#'s AutoProperties are a step in the right direction, but for nice DTOs with Fields, Properties and Constructor parameters you still need a lot of redundancy.
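
A hedged illustration, using a hypothetical DTO: the first class is the ceremony a generator (or the clipboard) produces per field, the second is what an auto-property buys you.

// Generated / copy-pasted accessor ceremony, repeated for every field:
public class CustomerDto
{
    private string name;

    public string Name
    {
        get { return name; }
        set { name = value; }
    }
}

// Auto-property: same behaviour, none of the ceremony -- though constructors
// and validation still have to be written (or generated) by hand.
public class CustomerDtoAuto
{
    public string Name { get; set; }
}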


All project managers should be required to have coding tasks

In teams that I have worked where the project manager was actually a programmer who understood the technical issues of the code well enough to accomplish coding tasks, the decisions that were made lacked the communication disconnect that often happens in teams where the project manager is not involved in the code.


Tools, Methodology, Patterns, Frameworks, etc. are no substitute for a properly trained programmer

I'm sick and tired of dealing with people (mostly managers) who think that the latest tool, methodology, pattern or framework is a silver bullet that will eliminate the need for hiring experienced developers to write their software. Although, as a consultant who makes a living rescuing at-risk projects, I shouldn't complain.


You can't measure productivity by counting lines of code.

Everyone knows this, but for some reason the practice still persists!


The class library guidelines for implementing IDisposable are wrong.

I don't share this too often, but I believe that the guidance for the default implementation for IDisposable is completely wrong.

My issue isn't with the overload of Dispose and then removing the item from finalization, but rather, I despise how there is a call to release the managed resources in the finalizer. I personally believe that an exception should be thrown (and yes, with all the nastiness that comes from throwing it on the finalizer thread).

The reasoning behind it is that if you are a client or server of IDisposable, there is an understanding that you can't simply leave the object lying around to be finalized. If you do, this is a design/implementation flaw (depending on how it is left lying around and/or how it is exposed), as you are not aware of the lifetime of instances that you should be aware of.

I think that this type of bug/error is on the level of race conditions/synchronization to resources. Unfortunately, with calling the overload of Dispose, that error is never materialized.
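
For readers who don't have the guideline memorised, the pattern being criticised looks roughly like this (my sketch, not the answerer's code); the disputed part is the finalizer quietly mopping up instead of surfacing the missed Dispose call:

using System;

public class ManagedResourceHolder : IDisposable
{
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // release managed resources here
        }
        // release unmanaged resources here
        disposed = true;
    }

    ~ManagedResourceHolder()
    {
        // Guideline version: clean up silently.
        // The answerer would rather throw (or at least scream in the logs) here,
        // because reaching the finalizer at all means somebody forgot to Dispose.
        Dispose(false);
    }
}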

Edit: I've written a blog post on the subject if anyone is interested:

http://www.caspershouse.com/post/A-Better-Implementation-Pattern-for-IDisposable.aspx


Uncommented code is the bane of humanity.

I think that comments are necessary for code. They visually divide it up into logical parts, and provide an alternative representation when reading code.

Documentation comments are the bare minimum, but using comments to split up longer functions helps when writing new code and allows quicker analysis when returning to existing code.


UML diagrams are highly overrated

Of course there are useful diagrams e.g. class diagram for the Composite Pattern, but many UML diagrams have absolutely no value.


That most language proponents make a lot of noise.


My one:

Long switch statements are your friends. Really. At least in C#.

People tend to avoid long switch statements, and discourage others from using them, because they are "unmanageable" and "have bad performance characteristics".

Well, the thing is that in C#, large switch statements are compiled automagically into jump tables (or hashed lookups for strings), so actually using them is the Best Thing To Do™ in terms of performance if you need simple branching to multiple branches. Also, if the case statements are organized and grouped intelligently (for example in alphabetical order), they are not unmanageable at all.
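
A trivial, hedged sketch of the kind of grouped switch meant here (the command names are made up):

// Cases grouped alphabetically; for a switch this shape the compiler emits a
// jump table (or a hashed lookup for string cases) rather than a chain of ifs.
static int CostOf(string command)
{
    switch (command)
    {
        case "add":      return 1;
        case "archive":  return 5;
        case "build":    return 20;
        case "clean":    return 2;
        case "deploy":   return 50;
        default:         return 0;
    }
}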


Architects that do not code are useless.

That sounds a little harsh, but it's not unreasonable. If you are the "architect" for a system, but do not have some amount of hands-on involvement with the technologies employed then how do you get the respect of the development team? How do you influence direction?

Architects need to do a lot more (meet with stakeholders, negotiate with other teams, evaluate vendors, write documentation, give presentations, etc.). But if you never see code checked in by your architect... be wary!


Before January 1st 1970, true and false were the other way around...


in almost all cases, comments are evil: http://gooddeveloper.wordpress.com/


C++ is one of the WORST programming languages - EVER.

It has all of the hallmarks of something designed by committee - it does not do any given job well, and does some jobs (like OO) terribly. It has a "kitchen sink" desperation to it that just won't go away.

It is a horrible "first language" to learn to program with. You get no elegance, no assistance (from the language). Instead you have bear traps and mine fields (memory management, templates, etc.).

It is not a good language to try to learn OO concepts. It behaves as "C with a class wrapper" instead of a proper OO language.

I could go on, but will leave it at that for now. I have never liked programming in C++, and although I "cut my teeth" on FORTRAN, I totally loved programming in C. I still think C was one of the great "classic" languages. Something that C++ is certainly NOT, in my opinion.

Cheers,

-R

EDIT: To respond to the comments on teaching C++. You can teach C++ in two ways - either teaching it as C "on steroids" (start with variables, conditions, loops, etc), or teaching it as a pure "OO" language (start with classes, methods, etc). You can find teaching texts that use one or other of these approaches. I prefer the latter approach (OO first) as it does emphasize the capabilities of C++ as an OO language (which was the original design emphasis of C++). If you want to teach C++ "as C", then I think you should teach C, not C++.

But the problem with C++ as a first language in my experience is that the language is simply too BIG to teach in one semester, plus most "intro" texts try and cover everything. It is simply not possible to cover all the topics in a "first language" course. You have to at least split it into 2 semesters, and then it's no longer "first language", IMO.

I do teach C++, but only as a "new language" - that is, you must be proficient in some prior "pure" language (not scripting or macros) before you can enroll in the course. C++ is a very fine "second language" to learn, IMO.

-R

'Nother Edit: (to Konrad)

I do not at all agree that C++ "is superior in every way" to C. I spent years coding C programs for microcontrollers and other embedded applications. The C compilers for these devices are highly optimized, often producing code as good as hand-coded assembler. When you move to C++, you gain a tremendous overhead imposed by the compiler in order to manage language features you may not use. In embedded applications, you gain little by adding classes and such, IMO. What you need is tight, clean code. You can write it in C++, but then you're really just writing C, and the C compilers are more optimized in these applications.

I wrote a MIDI engine, first in C, later in C++ (at the vendor's request) for an embedded controller (sound card). In the end, to meet the performance requirements (MIDI timings, etc) we had to revert to pure C for all of the core code. We were able to use C++ for the high-level code, and having classes was very sweet - but we needed C to get the performance at the lower level. The C code was an order of magnitude faster than the C++ code, but hand coded assembler was only slightly faster than the compiled C code. This was back in the early 1990s, just to place the events properly.

-R


Greater-than operators (>, >=) should be deprecated

I tried coding with a preference for less-than over greater-than for a while and it stuck! I don't want to go back, and indeed I feel that everyone should do it my way in this case.

Consider common mathematical 'range' notation: 0 <= i < 10

That's easy to approximate in code now and you get used to seeing the idiom where the variable is repeated in the middle joined by &&:

if (0 <= i && i < 10)
    return true;
else
    return false;

Once you get used to that pattern, you'll never look at silliness like

if ( ! (i < 0 || i >= 10))
    return true;

the same way again.

Long sequences of relations become a bit easier to work with because the operands tend towards nondecreasing order.

Furthermore, a preference for operator< is enshrined in the C++ standard library. In some cases equivalence (effectively operator==) is defined in terms of it! (as !(a<b || b<a))


If you only know one language, no matter how well you know it, you're not a great programmer.

There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning then that's all you need. I don't believe it- every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.

It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.


SESE (Single Entry Single Exit) is not law

Example:

public int foo() {
   if( someCondition ) {
      return 0;
   }

   return -1;
}

vs:

public int foo() {
   int returnValue = -1;

   if( someCondition ) {
      returnValue = 0;
   }

   return returnValue;
}

My team and I have found that abiding by this all the time is actually counter-productive in many cases.


switch-case is not object oriented programming

I often see a lot of switch-case blocks or awful big if-else constructs. This is merely a sign of not putting state where it belongs and not using the real and efficient switch-case construct that is already there: method lookup via the vtable.
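
A hedged sketch of the alternative being described: instead of switching on a type code, let method lookup do the branching.

using System;

// Instead of: switch (shape.Kind) { case Kind.Circle: ...; case Kind.Square: ...; }
public abstract class Shape
{
    public abstract double Area();
}

public class Circle : Shape
{
    private readonly double radius;
    public Circle(double radius) { this.radius = radius; }
    public override double Area() { return Math.PI * radius * radius; }
}

public class Square : Shape
{
    private readonly double side;
    public Square(double side) { this.side = side; }
    public override double Area() { return side * side; }
}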


Singletons are not evil

There is a place for singletons in the real world, and methods to get around them (i.e. the monostate pattern) are simply singletons in disguise. For instance, a Logger is a perfect candidate for a singleton. Additionally, so is a message pump. My current app uses distributed computing, and different objects need to be able to send appropriate messages. There should only be one message pump, and everyone should be able to access it. The alternative is passing an object to my message pump everywhere it might be needed and hoping that a new developer doesn't new one up without thinking and then wonder why his messages are going nowhere. The uniqueness of the singleton is the most important part, not its availability. The singleton has its place in the world.
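
A minimal sketch of such a pump (names are hypothetical); the one-instance guarantee is the whole point:

public sealed class MessagePump
{
    private static readonly MessagePump instance = new MessagePump();
    private MessagePump() { }

    public static MessagePump Instance
    {
        get { return instance; }
    }

    public void Send(string destination, object message)
    {
        // route the message to the appropriate node/queue here
    }
}

// Any object can reach it without the pump being threaded through every constructor:
// MessagePump.Instance.Send("billing", message);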


Never make up your mind on an issue before thoroughly considering said issue. No programming standard EVER justifies approaching an issue in a poor manner. If the standard demands a class to be written, but after careful thought, you deem a static method to be more appropriate, always go with the static method. Your own discretion is always better than even the best forward thinking of whoever wrote the standard. Standards are great if you're working in a team, but rules are meant to be broken (in good taste, of course).


A majority of the 'user-friendly' Fourth Generation Languages (SQL included) are worthless overrated pieces of rubbish that should have never made it to common use.

4GLs in general have a wordy and ambiguous syntax. Though 4GLs are supposed to allow 'non-technical people' to write programs, you still need the 'technical' people to write and maintain them anyway.

4GL programs in general are harder to write, harder to read and harder to optimize than their 3GL equivalents.

4GLs should be avoided as far as possible.


Lazy Programmers are the Best Programmers

A lazy programmer most often finds ways to decrease the amount of time spent writing code (especially a lot of similar or repeating code). This often translates into tools and workflows that other developers in the company/team can benefit from.

As the developer encounters similar projects he may create tools to bootstrap the development process (e.g. creating a DRM layer that works with the company's database design paradigms).

Furthermore, developers such as these often use some form of code generation. This means all bugs of the same type (for example, the code generator did not check for null parameters on all methods) can often be fixed by fixing the generator and not the 50+ instances of that bug.

A lazy programmer may take a few more hours to get the first product out the door, but will save you months down the line.


Hibernate is useless and damaging to the minds of developers.


That the Law of Demeter, considered in context of aggregation and composition, is an anti-pattern.


It is OK to use short variable names

But not for indices in nested loops.


It's a good idea to keep optimisation in mind when developing code.

Whenever I say this, people always reply: "premature optimisation is the root of all evil".

But I'm not saying optimise before you debug. I'm not even saying you must always optimise, but when you're designing code, bear in mind the possibility that it might become a bottleneck, and write it so that it will be possible to refactor it for speed without tearing the API apart.

Hugo


When someone dismisses an entire programming language as "clumsy", it usually turns out he doesn't know how to use it.


New web projects should consider not using Java.

I've been using Java to do web development for over 10 years now. At first, it was a step in the right direction compared to the available alternatives. Now, there are better alternatives than Java.

This is really just a specific case of the magic hammer approach to problem solving, but it's one that's really painful.


Software development is an art.



Assembly is the best first programming language.


Development teams should be segregated more often by technological/architectural layers instead of business function.

I come from a general culture where developers own "everything from web page to stored procedure". So in order to implement a feature in the system/application, they would prepare the database table schemas, write the stored procs, match the data access code, implement the business logic and web service methods, and the web page interfaces.

And guess what? Everybody has their own way of doing things! Everyone struggles to learn the ASP.NET AJAX and Telerik or Infragistics suites, Enterprise Library or other productivity and data layer and persistence frameworks, aspect-oriented frameworks, logging and caching application blocks, DB2 or Oracle peculiarities. And guess what? Everybody takes a heck of a long time to learn how to do things the proper way! Meaning, lots of mistakes in the meantime and plenty of resulting defects and performance bottlenecks! And a heck of a longer time to fix them! Across each and every layer! Everybody has a hand in every Visual Studio project. Nobody is specialised to handle and optimise one problem/technology domain. Too many chefs spoil the soup. All the chefs result in some radioactive goo.

Developers may have cross-layer/domain responsibilities, but they should not pretend that they can be masters of all disciplines, and should be limited to only a few. In my experience, when a project is not a small one and utilises lots of technologies, covering more business functions in a single layer is more productive (as well as encouraging more test code for that layer) than covering fewer business functions spanning the entire architectural stack (which motivates developers to test only via their UI and not write test code).


Object Oriented Programming is overused

Sometimes the best answer is the simple answer.


There are some (very few) legitimate uses for goto (particularly in C, as a stand-in for exception handling).


Okay, I said I'd give a bit more detail on my "sealed classes" opinion. I guess one way to show the kind of answer I'm interested in is to give one myself :)

Opinion: Classes should be sealed by default in C#

Reasoning:

There's no doubt that inheritance is powerful. However, it has to be somewhat guided. If someone derives from a base class in a way which is completely unexpected, this can break the assumptions in the base implementation. Consider two methods in the base class, where one calls another - if these methods are both virtual, then that implementation detail has to be documented, otherwise someone could quite reasonably override the second method and expect a call to the first one to work. And of course, as soon as the implementation is documented, it can't be changed... so you lose flexibility.

C# took a step in the right direction (relative to Java) by making methods sealed by default. However, I believe a further step - making classes sealed by default - would have been even better. In particular, it's easy to override methods (or not explicitly seal existing virtual methods which you don't override) so that you end up with unexpected behaviour. This wouldn't actually stop you from doing anything you can currently do - it's just changing a default, not changing the available options. It would be a "safer" default though, just like the default access in C# is always "the most private visibility available at that point."

By making people explicitly state that they wanted people to be able to derive from their classes, we'd be encouraging them to think about it a bit more. It would also help me with my laziness problem - while I know I should be sealing almost all of my classes, I rarely actually remember to do so :(

Counter-argument:

I can see an argument that says that a class which has no virtual methods can be derived from relatively safely without the extra inflexibility and documentation usually required. I'm not sure how to counter this one at the moment, other than to say that I believe the harm of accidentally-unsealed classes is greater than that of accidentally-sealed ones.


What strikes me as amusing about this question is that I've just read the first page of answers, and so far, I haven't found a single controversial opinion.

Perhaps that says more about the way stackoverflow generates consensus than anything else. Maybe I should have started at the bottom. :-)


You only need 3 to 5 languages to do everything. C is a definite. Maybe assembly, but you should at least know it and be able to use it. Maybe JavaScript and/or Java if you code for the web. A shell language like bash, and one HLL like Lisp, which might be useful. Anything else is a distraction.


Extension Methods are the work of the Devil

Everyone seems to think that extension methods in .Net are the best thing since sliced bread. The number of developers singing their praises seems to rise by the minute, but I'm afraid I can't help but despise them, and unless someone can come up with a brilliant justification or example that I haven't already heard, I will never write one. I recently came across this thread and I must say that reading the examples of the highest-voted extensions made me feel a little like vomiting (metaphorically, of course).

The main reasons given for their extensiony goodness are increased readability, improved OO-ness and the ability to chain method calls better.

I'm afraid I have to differ: I find, in fact, that they unequivocally reduce readability and OO-ness by virtue of the fact that they are, at their core, a lie. If you need a utility method that acts upon an object, then write a utility method that acts on that object - don't lie to me. When I see aString.SortMeBackwardsUsingKlingonSortOrder, the call claims that string has that method, as if it were telling me something about the string object rather than about the AnnoyingNerdReferences.StringUtilities class where the code really lives.
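
In code, the complaint is roughly this (hypothetical names, a hedged sketch): the same helper written two ways, and only one of them is honest about where it lives.

using System;

public static class StringUtilities
{
    // Honest utility method: StringUtilities.SortBackwards(aString);
    public static string SortBackwards(string value)
    {
        char[] chars = value.ToCharArray();
        Array.Sort(chars);
        Array.Reverse(chars);
        return new string(chars);
    }

    // Extension method: aString.SortBackwardsExt(); -- reads as if string itself
    // had the method, when the code actually lives in this utility class.
    public static string SortBackwardsExt(this string value)
    {
        return SortBackwards(value);
    }
}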

LINQ was designed in such a way that chained method calls are necessary to avoid strange and uncomfortable expressions, and the extension methods that arise from LINQ are understandable, but in general chained method calls reduce readability and lead to the sort of code we see in obfuscated Perl contests.

So, in short, extension methods are evil. Cast off the chains of Satan and commit yourself to extension free code.


Performance does matter.


Not everything needs to be encapsulated into its own method. Some times it is ok to have a method do more then one thing.


Women make better programmers than men.

The female programmers I've worked with don't get wedded to "their" code as much as men do. They're much more open to criticism and new ideas.


Intranet frameworks like SharePoint make me think the whole corporate world is one giant ostrich with its head in the sand

I'm not only talking about MOSS here, I've worked with some other CORPORATE INTRANET products, and absolutely not one of them is great, but SharePoint (MOSS) is by far the worst.

  • Most of these systems don't easily bridge the gap between Intranet and Internet. So as a remote worker you're forced to VPN in, and external customers just don't have the luxury of getting hold of your internal information first hand. Sure, this can be fixed - at a price $$$.
  • The search capabilities are always pathetic. Lots of the time other departments simply don't know what information is out there.
  • Information fragments; people start boycotting workflows or reverting to email.
  • SharePoint development is the most painful form of development on the planet. Nothing sucks like SharePoint. I've seen a few developers contemplating quitting IT after working for over a year with MOSS.
  • No matter how much the developers hate MOSS, no matter how long the most basic of projects take to roll out, no matter how novice the results look, and no matter how unsearchable and fragmented the content is:

EVERYONE STILL CONTINUES TO USE AND PURCHASE SHAREPOINT, AND MANAGERS STILL TRY VERY HARD TO PRETEND IT'S NOT SATAN'S SPAWN.

Microformats

Using CSS classes originally designed for visual layout to carry both visual and contextual data is a hack, and it creates loads of ambiguity. I'm not saying the functionality should not exist, but fix the damn base language. HTML wasn't hacked to produce XML - instead the XML language emerged. Now we have these eager script kiddies hacking HTML and CSS to do something they weren't designed to do. That's still fine, but I wish they would keep these things to themselves and not make a standard out of them. Just to sum up - butchery!


Excessive HTML in PHP files: sometimes necessary

Excessive Javascript in PHP files: trigger the raptor attack

While I can just about figure out all your switching between echoing and ?> <?php 'ing HTML (after all, PHP is just a processor for HTML), lines and lines of JavaScript added in make it a completely unmaintainable mess.

People have to grasp this: They are two separate programming languages. Pick one to be your primary language. Then go on and find a quick, clean and easily maintainable way to make your primary include the secondary language.

The reason why you jump between PHP, Javascript and HTML all the time is because you are bad at all three of them.

Ok, maybe it's not exactly controversial. I had the impression this was a general frustration venting topic :)


Use type inference anywhere and everywhere possible.

Edit:

Here is a link to a blog entry I wrote several months ago about why I feel this way.

http://blogs.msdn.com/jaredpar/archive/2008/09/09/when-to-use-type-inference.aspx
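
For anyone wondering what "anywhere and everywhere possible" looks like in C#, a small hedged sketch:

using System.Collections.Generic;
using System.Linq;

class TypeInferenceDemo
{
    static void Demo()
    {
        // The type is written once, on the right-hand side, instead of twice.
        var lookup = new Dictionary<string, List<int>>();
        var numbers = new List<int> { 3, 1, 4, 1, 5 };
        var total = numbers.Sum();                           // still statically typed (int)
        var evens = numbers.Where(n => n % 2 == 0).ToList(); // inferred from the expression
    }
}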


The word 'evil' is an abused and overused word on Stackoverflow and simular forums.

People who use it have too little imagination.


If you want to write good software then step away from your computer

Go and hang out with the end users and the people who want and need the software. Only from them will you understand what your software needs to accomplish and how it needs to do that.

  • Ask them what they love & hate about the existing processes.
  • Ask them about the future of their processes, where it is headed.
  • Hang out and see what they use now and figure out their usage patterns. You need to meet and match their usage expectations. See what else they use a lot, particularly if they like it and can use it efficiently. Match that.

The end user doesn't give a rat's how elegant your code is or what language it's in. If it works for them and they like using it, you win. If it doesn't make their lives easier and better - they hate it, you lose.

Walk a mile in their shoes - then go write your code.


Write your spec when you are finished coding. (if at all)

In many projects I have been involved in, a great deal of effort was spent at the outset writing a "spec" in Microsoft Word. This process culminated in a "sign off" meeting when the big shots bought in on the project, and after that meeting nobody ever looked at the document again. These documents are a complete waste of time and don't reflect how software is actually designed. This is not to say there are not other valuable artifacts of application design. They are usually contained on index cards, snapshots of whiteboards, cocktail napkins and other similar media that provide a kind of timeline for the app design. These usually are the real specs of the app. If you are going to write a Word document (and I am not particularly saying you should), do it at the end of the project. At least it will accurately represent what has been done in the code and might help someone down the road, like the QA team or the next version's developers.


All variables/properties should be readonly/final by default.

The reasoning is a bit analogous to the sealed argument for classes, put forward by Jon. One entity in a program should have one job, and one job only. In particular, it makes absolutely no sense for most variables and properties to ever change value. There are basically two exceptions.

  1. Loop variables. But then, I argue that the variable actually doesn't change value at all. Rather, it goes out of scope at the end of the loop and is re-instantiated in the next turn. Therefore, immutability would work nicely with loop variables and everyone who tries to change a loop variable's value by hand should go straight to hell.

  2. Accumulators. For example, imagine the case of summing over the values in an array, or even a list/string that accumulates some information about something else.

    Today, there are better means to accomplish the same goal. Functional languages have higher-order functions, Python has list comprehension and .NET has LINQ. In all these cases, there is no need for a mutable accumulator / result holder.

    Consider the special case of string concatenation. In many environments (.NET, Java), strings are actually immutables. Why then allow an assignment to a string variable at all? Much better to use a builder class (i.e. a StringBuilder) all along.

I realize that most languages today just aren't built to acquiesce in my wish. In my opinion, all these languages are fundamentally flawed for this reason. They would lose nothing of their expressiveness, power, and ease of use if they would be changed to treat all variables as read-only by default and didn't allow any assignment to them after their initialization.
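
A hedged sketch of the accumulator point in C#: the declarative versions need no reassignable local at all, and the string case goes through a builder.

using System.Collections.Generic;
using System.Linq;
using System.Text;

class Accumulators
{
    // Classic mutable accumulator: 'sum' is reassigned on every iteration.
    static int SumMutable(IEnumerable<int> values)
    {
        int sum = 0;
        foreach (int v in values) { sum += v; }
        return sum;
    }

    // No mutable result holder in sight.
    static int SumDeclarative(IEnumerable<int> values)
    {
        return values.Sum();
    }

    // Concatenation without ever reassigning a string variable.
    static string Concatenate(IEnumerable<string> parts)
    {
        var builder = new StringBuilder();
        foreach (string part in parts) { builder.Append(part); }
        return builder.ToString();
    }
}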


Developers are all different, and should be treated as such.

Developers don't fit into a box, and shouldn't be treated as such. The best language or tool for solving a problem has just as much to do with the developers as it does with the details of the problem being solved.


Design patterns are hurting good design more than they're helping it.

IMO software design, especially good software design, is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember - and they're far too abstract for people to really remember more than a handful. So they're not helping much.

And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere - usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.


Microsoft is not as bad as many say they are.


Human brain is the master key to all locks.

There is nothing in this world that can move faster than your brain. Trust me, this is not philosophical but practical. Well, as far as opinions are concerned, they are as follows:


1) Never go outside the boundaries specified by the programming language. A simple example would be pointers in C and C++. Don't misuse them, as you are likely to get the DAMN SEGMENTATION FAULT.

2) Always follow the coding standards. Yes, what you are reading is correct: coding standards do a lot for your program. After all, your program is written to be executed by a machine but to be understood by some other brain :)


Upfront design - don't just start writing code because you're excited to write code

I've seen SO many apps that are poorly designed because the developer was so excited to get coding that they just opened up a blank page and started writing code. I understand that things change during the development lifecycle. However, it's difficult working with applications that have several different layouts and development methodologies from form to form, method to method.

It's difficult to hit the target your application is meant to handle if you haven't clearly defined the task and how you plan to code it. Take some time (and not just 5 minutes) and make sure you've laid out as much of it as you can before you start coding. That way you'll avoid a spaghetti mess that your replacement will have to support.


  1. Good architecture is grown, not designed.

  2. Managers should make sure their team members always work below their state of the art, whatever that level is. When people work within their comfort zone they produce higher quality code.


Using regexs to parse HTML is, in many cases, fine

Every time someone posts a question on Stack Overflow asking how to achieve some HTML manipulation with a regex, the first answer is "Regex is an insufficient tool to parse HTML, so don't do it". If the questioner were trying to build a web browser, this would be a helpful answer. However, usually the questioner wants to do something like add a rel tag to all the links to a certain domain, usually in a case where certain assumptions can be made about the style of the incoming markup, something that is entirely reasonable to do with a regex.
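
A hedged sketch of that narrow case, assuming double-quoted hrefs and no existing rel attribute (exactly the kind of assumptions you have to be able to make for this to be reasonable):

using System.Text.RegularExpressions;

class RelTagger
{
    // Adds rel="nofollow" to anchors pointing at example.com. This is not a
    // general HTML parser -- it only works because of the assumptions above.
    static string AddNoFollow(string html)
    {
        return Regex.Replace(
            html,
            "<a\\s+href=\"(https?://(www\\.)?example\\.com[^\"]*)\"",
            "<a rel=\"nofollow\" href=\"$1\"",
            RegexOptions.IgnoreCase);
    }
}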


Programmers should avoid method hiding through inheritance at all costs.

In my experience, virtually every place I have ever seen inherited method hiding used it has caused problems. Method hiding results in objects behaving differently when accessed through a base type reference vs. a derived type reference - this is generally a Bad Thing. While many programmers are not formally aware of it, most intuitively expect that objects will adhere to the Liskov Substitution Principle. When objects violate this expectation, many of the assumptions inherent to object-oriented systems can begin to fray. The most egregious cases I've seen is when the hidden method alters the state of the object instance. In these cases, the behavior of the object can change in subtle ways that are difficult to debug and diagnose.
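
A minimal sketch of the base-reference versus derived-reference surprise (invented class names):

using System;

class BaseAccount
{
    public string Status() { return "open"; }
}

class HidingAccount : BaseAccount
{
    // 'new' hides rather than overrides: which method runs now depends on the
    // static type of the reference, not on the object itself.
    public new string Status() { return "frozen"; }
}

class Demo
{
    static void Main()
    {
        BaseAccount viaBase = new HidingAccount();
        HidingAccount viaDerived = new HidingAccount();

        Console.WriteLine(viaBase.Status());     // "open"   -- base version
        Console.WriteLine(viaDerived.Status());  // "frozen" -- hiding version
    }
}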

Ok, so there may be some infrequent cases where method hiding is actually useful and beneficial - like emulating return type covariance of methods in languages that don't support it. But the vast majority of time, when developers use method hiding it is either out of ignorance (or accident) or as a way to hack around some problem that probably deserves better design treatment. In general, the beneficial cases I've seen of method hiding (not to say there aren't others) is when a side-effect free method that returns some information is hidden by one that computes something more applicable to the calling context.

Languages like C# have improved things a bit by requiring the new keyword on methods that hide a base class method - at least helping avoid involuntary use of method hiding. But I find that many people still confuse the meaning of new with that of override - particularly since in simple scenarios their behavior can appear identical. It would be nice if tools like FxCop actually had built-in rules for identifying potentially bad usage of method hiding.

By the way, method hiding through inheritance should not be confused with other kinds of hiding - such as through nesting - which I believe is a valid and useful construct with fewer potential problems.


This one is mostly web related but...

Use Tables for your web page layouts

If I was developing a gigantic site that needed to squeeze out performance I might think about it, but nothing gives me an easier way to get a consistent look out on the browser than tables. The majority of applications that I develop are for around 100-1000 users and possibly 100 at a time max. The extra bloat of the tables isn't killing my server by any means.


XHTML is evil. Write HTML

You will have to set the MIME type to text/html anyway, so why fool yourself into believing that you are really writing XML? Whoever is going to download your page is going to believe that it is HTML, so make it HTML.

And with that, feel free and happy to not close your <li>, it isn't necessary. Don't close the html tag, the file is over anyway. It is valid HTML and it can be parsed perfectly.

It will give you more readable code with less boilerplate, and you don't lose a thing. HTML parsers work fine!

And when you are done, move on to HTML5. It is better.


I believe in the Zen of Python


"Using stored procs is easy to maintain and means less deployment" vs "Using an ORM is the OO way, thus it is good"

I've heard this a lot in many of my projects; whenever these statements appear, it is always tough to get the argument settled.


Emacs is better


System.Data.DataSet Rocks!

Strongly-typed DataSets are better, in my opinion, than custom DDD objects for most business applications.

Reasoning: We're bending over backwards to figure out Unit of Work on custom objects, LINQ to SQL, Entity Framework, and it's all adding complexity. Use a nice code generator from somewhere to generate the data layer, and the Unit of Work sits on the object collections (DataTable and DataSet) - no mystery.


The world needs more GOTOs

GOTOs are avoided religiously, often with no reasoning beyond "my professor told me GOTOs are bad." They have a purpose and would greatly simplify production code in many places.

That said, they aren't really necessary in 99% of the code you'll ever write.
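
One of the defensible cases, as a hedged sketch: bailing out of a nested loop without flag variables or restructuring.

static bool ContainsNegative(int[,] grid)
{
    for (int row = 0; row < grid.GetLength(0); row++)
    {
        for (int col = 0; col < grid.GetLength(1); col++)
        {
            if (grid[row, col] < 0)
            {
                goto Found;   // one jump beats a 'found' flag checked in two places
            }
        }
    }
    return false;

Found:
    return true;
}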


Opinion: Unit tests don't need to be written up front, and sometimes not at all.

Reasoning: Developers suck at testing their own code. We do. That's why we generally have test teams or QA groups.

Most of the time the code we write is too intertwined with other code to be tested separately, so we end up jumping through patterned hoops to provide testability. Not that those patterns are bad, but they can sometimes add unnecessary complexity, all for the sake of unit testing...

... which often doesn't work anyway. To write a comprehensive unit test requires a lot of time. Often more time than we're willing to give. And the more comprehensive the test, the more brittle it becomes if the interface of the thing it's testing changes, forcing a rewrite of a test that no longer compiles.


Test Constantly

You have to write tests, and you have to write them FIRST. Writing tests changes the way you write your code. It makes you think about what you want it to actually do before you just jump in and write something that does everything except what you want it to do.

It also gives you goals. Watching your tests go green gives you that little extra bump of confidence that you're getting something accomplished.

It also gives you a basis for writing tests for your edge cases. Since you wrote the code against tests to begin with, you probably have some hooks in your code to test with.

There is no excuse not to test your code. If you don't, you're just lazy. I also think you should test first, as the benefits outweigh the extra time it takes to code this way.
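
A minimal test-first sketch, assuming an NUnit-style framework and a hypothetical PriceCalculator: the test is written before the class exists and pins down what the code is supposed to do.

using NUnit.Framework;

[TestFixture]
public class PriceCalculatorTests
{
    // Written first: it defines what "apply a percentage discount" means
    // before any implementation exists.
    [Test]
    public void Discount_of_ten_percent_reduces_price_accordingly()
    {
        var calculator = new PriceCalculator();
        Assert.AreEqual(90m, calculator.ApplyDiscount(100m, 0.10m));
    }
}

// The simplest thing that makes the test go green.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal rate)
    {
        return price * (1 - rate);
    }
}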


A picture is not worth a thousand words.

Some pictures might be worth a thousand words. Most of them are not. This trite old aphorism is mostly untrue and is a pathetic excuse for many a lazy manager who did not want to read carefully created reports and documentation and instead said "I need you to show me in a diagram."

My wife studied for a linguistics major and saw several fascinating proofs against the conventional wisdom on pictures and logos: they do not break across language and cultural barriers, they usually do not communicate anywhere near as much information as correct text, they simply are no substitute for real communication.

In particular, labeled bubbles connected with lines are useless if the lines are unlabeled and unexplained, and/or if every line has a different meaning instead of signifying the same relationship (unless distinguished from each other in some way). If your lines sometimes signify relationships and sometimes indicate actions and sometimes indicate the passage of time, you're really hosed.

Every good programmer knows you use the tool suited for the job at hand, right? Not all systems are best specified and documented in pictures. Graphical specification languages that can be automatically turned into provably-correct, executable code or whatever are a spectacular idea, if such things exist. Use them when appropriate, not for everything under the sun. Entity-Relationship diagrams are great. But not everything can be summed up in a picture.

Note: a table may be worth its weight in gold. But a table is not the same thing as a picture. And again, a well-crafted short prose paragraph may be far more suitable for the job at hand.


Primitive data types are premature optimization.

There are languages that get by with just one data type, the scalar, and they do just fine. Other languages are not so fortunate. Developers just throw "int" and "double" in because they have to write in something.

What's important is not how big the data types are, but what the data is used for. If you have a day of the month variable, it doesn't matter much if it's signed or unsigned, or whether it's char, short, int, long, long long, float, double, or long double. It does matter that it's a day of the month, and not a month, or day of week, or whatever. See Joel's column on making things that are wrong look wrong; Hungarian notation as originally proposed was a Good Idea. As used in practice, it's mostly useless, because it says the wrong thing.
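
To make the point concrete, here is a minimal C# sketch (the type name and range check are my own illustration, not from the answer above): a thin wrapper says "day of month" where a bare int says nothing, regardless of how wide the underlying primitive is.

using System;

// Hypothetical domain type: a "day of month" can't be confused with a month or a weekday.
public readonly struct DayOfMonth
{
    public int Value { get; }

    public DayOfMonth(int value)
    {
        if (value < 1 || value > 31)
            throw new ArgumentOutOfRangeException(nameof(value));
        Value = value;
    }
}

// DayOfMonth dom = new DayOfMonth(17);   // fine
// DayOfMonth bad = new DayOfMonth(42);   // rejected at the domain boundary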


That software can be bug free if you have the right tools and take the time to write it properly.


Your job is to put yourself out of work.

When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.

If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.

Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.


Newer languages, and managed code do not make a bad programmer better.


Don't comment your code

Comments are not code and therefore when things change it's very easy to not change the comment that explained the code. Instead I prefer to refactor the crap out of code to a point that there is no reason for a comment. An example:

if(data == null)  // First time on the page

to:

bool firstTimeOnPage = data == null;
if(firstTimeOnPage)

The only time I really comment is when it's a TODO or when I'm explaining why:

Widget.GetData(); // only way to grab data, TODO: extract interface or wrapper

I don't believe that any question related to optimization should be flooded with a chant of the misquoted "premature optimization is the root of all evil", because code that is optimized into obfuscation is what makes coding fun.


Cowboy coders get more done.

I spend my life in the startup atmosphere. Without the Cowboy coders we'd waste endless cycles making sure things are done "right".

As we know, it's basically impossible to foresee all issues. The Cowboy coder runs head-on into these problems and is forced to solve them much more quickly than someone who tries to foresee them all.

Though, if you're Cowboy coding you had better refactor that spaghetti before someone else has to maintain it. ;) The best ones I know use continuous refactoring. They get a ton of stuff done, don't waste time trying to predict the future, and through refactoring it becomes maintainable code.

Process always gets in the way of a good Cowboy, no matter how Agile it is.


Don't use stored procs in your database.

The reasons they were originally good - security, abstraction, single connection - can all be done in your middle tier with ORMs that integrate lots of other advantages.

This one is definitely controversial. Every time I bring it up, people tear me apart.


If it isn't worth testing, it isn't worth building


Debuggers are a crutch.

It's so controversial that even I don't believe it as much as I used to.

Con: I spend more time getting up to speed on other people's voluminous code, so anything that helps with "how did I get here" and "what is happening", either pre-mortem or post-mortem, can be helpful.

Pro: However, I happily stand by the idea that if you don't understand the answers to those questions for code that you developed yourself or that you've become familiar with, spending all your time in a debugger is not the solution, it's part of the problem.

Before hitting 'Post Your Answer' I did a quick Google check for this exact phrase, it turns out that I'm not the only one who has held this opinion or used this phrase. I turned up a long discussion of this very question on the Fog Creek software forum, which cited various luminaries including Linus Torvalds as notable proponents.


I'd say that my most controversial opinion on programming is that I honestly believe you shouldn't worry so much about throw-away code and rewriting code. Too many times people feel that if you write something down, then changing it means you did something wrong. But the way my brain works is to get something very simple working, and update the code slowly, while ensuring that the code and the test continue to function together. It may mean actually creating classes, methods, additional parameters, etc. that I full well know will go away in a few hours. But I do it because I want to take only small steps toward my goal. In the end, I don't think I spend any more time using this technique than the programmers who stare at the screen trying to figure out the best design up front before writing a line of code.

The benefit I get is that I'm not having to constantly deal with software that no longer works because I happen to break it somehow and am trying to figure out what stopped working and why.


I can live without closures.

Looks like nowadays everyone and their mother wants closures to be present in a language because they're the greatest invention since sliced bread. And I think it is just hype.


Avoid indentation.

Use early returns, continues or breaks.

instead of:

if (passed != NULL)
{
   for(x in list)
   {
      if (peter)
      {
          print "peter";
          more code.
          ..
          ..
      }
      else
      {
          print "no peter?!"
      }
   }
}

do:

if (passed == NULL)
    return false;

for(x in list)
{
   if (!peter)
   {
       print "no peter?!"
       continue;
   }

   print "peter";
   more code.
   ..
   ..
}

Before January 1st 1970, true and false were the other way around...


Open Source software costs more in the long run

For regular Line of Business companies, Open Source looks free but has hidden costs.

When you take into account inconsistency of quality, variable usability and UI/UX, difficulties of interoperability and standards, increased configuration, associated increased need for training and support, the Total Cost of Ownership for Open Source is much higher than commercial offerings.

Tech-savvy programmer-types take the liberation of Open Source and run with it; they 'get it' and can adopt it and customise it to suit their purposes. On the other hand, businesses that are primarily non-technical, but need software to run their offices, networks and websites, are running the risk of a world of pain and heavy costs in terms of lost time, productivity and (eventually) support fees and/or the cost of abandoning the experiment altogether.


Whenever you expose a mutable class to the outside world, you should provide events to make it possible to observe its mutation. The extra effort may also convince you to make it immutable after all.
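
A minimal C# sketch of the idea (names are illustrative): a mutable class that raises an event on every mutation, so callers can observe it -- and the extra ceremony is exactly the nudge toward immutability mentioned above.

using System;

public class Counter
{
    public event EventHandler Changed;

    private int count;
    public int Value
    {
        get { return count; }
        set
        {
            count = value;
            Changed?.Invoke(this, EventArgs.Empty);   // every mutation is observable
        }
    }
}

// var c = new Counter();
// c.Changed += (s, e) => Console.WriteLine("Counter mutated");
// c.Value = 5;   // prints "Counter mutated"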


I think we should move away from 'C'. It's too old! But the old dog is still barking the loudest!!


2 space indent.

No discussion. It just has to be that way ;-)


Code == Design

I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.


Here's an article on the topic of Code as Design.


Opinion: Unit tests don't need to be written up front, and sometimes not at all.

Reasoning: Developers suck at testing their own code. We do. That's why we generally have test teams or QA groups.

Most of the time the code we write is too intertwined with other code to be tested separately, so we end up jumping through patterned hoops to provide testability. Not that those patterns are bad, but they can sometimes add unnecessary complexity, all for the sake of unit testing...

... which often doesn't work anyway. Writing a comprehensive unit test requires a lot of time. Often more time than we're willing to give. And the more comprehensive the test, the more brittle it becomes if the interface of the thing it's testing changes, forcing a rewrite of a test that no longer compiles.


It's a good idea to keep optimisation in mind when developing code.

Whenever I say this, people always reply: "premature optimisation is the root of all evil".

But I'm not saying optimise before you debug. I'm not even saying optimise ever, but when you're designing code, bear in mind the possibility that this might become a bottleneck, and write it so that it will be possible to refactor it for speed, without tearing the API apart.

Hugo


Software is not an engineering discipline.

We never should have let the computers escape from the math department.


Programmers who don't code in their spare time for fun will never become as good as those that do.

I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.

(Note: I'm not saying good programmers do nothing else than programming, but they do more than program from 9 to 5)


Lazy Programmers are the Best Programmers

A lazy programmer most often finds ways to decrease the amount of time spent writing code (especially a lot of similar or repeating code). This often translates into tools and workflows that other developers in the company/team can benefit from.

As the developer encounters similar projects he may create tools to bootstrap the development process (e.g. creating a DRM layer that works with the company's database design paradigms).

Furthermore, developers such as these often use some form of code generation. This means all bugs of the same type (for example, the code generator did not check for null parameters on all methods) can often be fixed by fixing the generator and not the 50+ instances of that bug.

A lazy programmer may take a few more hours to get the first product out the door, but will save you months down the line.


Dependency Management Software Does More Harm Than Good

I've worked on Java projects that included upwards of a hundred different libraries. In most cases, each library has its own dependencies, and those dependent libraries have their own dependencies too.

Software like Maven or Ivy supposedly "manage" this problem by automatically fetching the correct version of each library and then recursively fetching all of its dependencies.

Problem solved, right?

Wrong.

Downloading libraries is the easy part of dependency management. The hard part is creating a mental model of the software, and how it interacts with all those libraries.

My unpopular opinion is this:

If you can't verbally explain, off the top of your head, the basic interactions between all the libraries in your project, you should eliminate dependencies until you can.

Along the same lines, if it takes you longer than ten seconds to list all of the libraries (and their methods) invoked either directly or indirectly from one of your functions, then you are doing a poor job of managing dependencies.

You should be able to easily answer the question "which parts of my application actually depend on library XYZ?"

The current crop of dependency management tools do more harm than good, because they make it easy to create impossibly-complicated dependency graphs, and they provide virtually no functionality for reducing dependencies or identifying problems.

I've seen developers include 10 or 20 MB worth of libraries, introducing thousands of dependent classes into the project, just to eliminate a few dozen lines of simple custom code.

Using libraries and frameworks can be good. But there's always a cost, and tools which obscure that cost are inherently problematic.

Moreover, it's sometimes (note: certainly not always) better to reinvent the wheel by writing a few small classes that implement exactly what you need than to introduce a dependency on a large general-purpose library.


Opinion: Data driven design puts the cart before the horse. It should be eliminated from our thinking forthwith.

The vast majority of software isn't about the data, it's about the business problem we're trying to solve for our customers. It's about a problem domain, which involves objects, rules, flows, cases, and relationships.

When we start our design with the data, and model the rest of the system after the data and the relationships between the data (tables, foreign keys, and x-to-x relationships), we constrain the entire application to how the data is stored in and retrieved from the database. Further, we expose the database architecture to the software.

The database schema is an implementation detail. We should be free to change it without having to significantly alter the design of our software at all. The business layer should never have to know how the tables are set up, or if it's pulling from a view or a table, or getting the table from dynamic SQL or a stored procedure. And that type of code should never appear in the presentation layer.

Software is about solving business problems. We deal with users, cars, accounts, balances, averages, summaries, transfers, animals, messages, packages, carts, orders, and all sorts of other real tangible objects, and the actions we can perform on them. We need to save, load, update, find, and delete those items as needed. Sometimes, we have to do those things in special ways.

But there's no real compelling reason that we should take the work that should be done in the database and move it away from the data and put it in the source code, potentially on a separate machine (introducing network traffic and degrading performance). Doing so means turning our backs on the decades of work that has already been done to improve the performance of stored procedures and functions built into databases. The argument that stored procedures introduce "yet another API" to be managed is specious: of course it does; that API is a facade that shields you from the database schema, including the intricate details of primary and foreign keys, transactions, cursors, and so on, and it prevents you from having to splice SQL together in your source code.

Put the horse back in front of the cart. Think about the problem domain, and design the solution around it. Then, derive the data from the problem domain.


Web applications suck

My Internet connection is veeery slow. My experience with almost every Web site that is not Google is, at least, frustrating. Why doesn't anybody write desktop apps anymore? Oh, I see. Nobody wants to be bothered with learning how operating systems work. At least, not Windows. The last time you had to handle WM_PAINT, your head exploded. Creating a worker thread to perform a long task (I mean, doing it the Windows way) was totally beyond you. What the hell was a callback? Oh, my God!


Garbage collection sucks

No, it actually doesn't. But it makes programmers suck like nothing else. In college, the first language they taught us was Visual Basic (the original one). After that, there was another course where the teachers pretended to teach us C++. But the damage was done. Nobody actually knew what this esoteric keyword delete did. After testing our programs, we either got invalid address exceptions or memory leaks. Sometimes, we got both. Among the 1% of my faculty who can actually program, only one can manage his memory by himself (at least, he pretends to), and he's writing this rant. The rest write their programs in VB.NET, which, by definition, is a bad language.


Dynamic typing sucks

Unless you're using assembler, of course (that's the kind of dynamic typing that actually deserves praise). What I mean is that the overhead imposed by dynamic, interpreted languages makes them suck. And don't come at me with that silly argument that different tools are good for different jobs. C is the right language for almost everything (it's fast, powerful and portable), and, when it isn't (it's not fast enough), there's always inline assembly.


I might come up with more rants, but that will be later, not now.


Most consulting programmers suck and should not be allowed to write production code.

IMHO, probably about 60% or more.


Software Architects/Designers are Overrated

As a developer, I hate the idea of Software Architects. They are basically people that no longer code full time, read magazines and articles, and then tell you how to design software. Only people that actually write software full time for a living should be doing that. I don't care if you were the world's best coder 5 years ago before you became an Architect; your opinion is useless to me.

How's that for controversial?

Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.


If you have any idea how to program you are not fit to place a button on a form

Is that controversial enough? ;)

No matter how hard we try, it's almost impossible to have appropriate empathy with 53-year-old Doris who has to use our order-entry software. We simply cannot grasp the mental model of what she imagines is going on inside the computer, because we don't need to imagine: we know what's going on, or have a very good idea.

Interaction Design should be done by non-programmers. Of course, this is never actually going to happen. Contradictorily I'm quite glad about that; I like UI design even though deep down I know I'm unsuited to it.

For further info, read the book The Inmates Are Running the Asylum. Be warned, I found this book upsetting and insulting; it's a difficult read if you are a developer that cares about the user's experience.


Java is the COBOL of our generation.

Everyone learns to code in it. There is code for it running in big companies that will try to keep it running for decades. Everyone comes to despise it compared to all the other choices out there, but they're forced to use it anyway because it pays the bills.


There are only 2 kinds of people who use C (/C++): Those who don't know any other language, and those who are too lazy to learn a new one.


Hibernate is useless and damaging to the minds of developers.


Controversial to self, because some things are better be left unsaid, so you won't be painted by others as too egotist. However, here it is:

If it is to be, it begins with me



"Using stored procs is easier to maintain and means less deployment" vs. "Using an ORM is the OO way, thus it is good"

I've heard this a lot in many of my projects; whenever these statements appear, it is always tough to get the argument settled.


The code is the design


It IS possible to secure your application.

Every time someone asks a question about how to either prevent users from pirating their app, or secure it from hackers, the answer is that it's impossible. Nonsense. If you truly believe that, then leave your doors unlocked (or just take them off the house!). And don't bother going to the doctor, either. You're mortal - trying to cure a sickness is just postponing the inevitable.

Just because someone might be able to pirate your app or hack your system doesn't mean you shouldn't try to reduce the number of people who will do so. What you're really doing is making it require more work to break in than the intruder/pirate is willing to do.

Just like a deadbolt and ADT on your house will keep the burglars out, reasonable anti-piracy and security measures will keep hackers and pirates out of your way. Of course, the more tempting it would be for them to break in, the more security you need.


System.Data.DataSet Rocks!

Strongly-typed DataSets are better, in my opinion, than custom DDD objects for most business applications.

Reasoning: We're bending over backwards to figure out Unit of Work on custom objects, LINQ to SQL, Entity Framework and it's adding complexity. Use a nice code generator from somewhere to generate the data layer and the Unit of Work sits on the object collections (DataTable and DataSet)--no mystery.
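
A rough sketch of the "Unit of Work sits on the collections" point, using an untyped DataTable for brevity (a generated strongly-typed DataSet adds compile-time column names):

using System;
using System.Data;

class DataSetSketch
{
    static void Main()
    {
        var orders = new DataTable("Orders");
        orders.Columns.Add("Id", typeof(int));
        orders.Columns.Add("Customer", typeof(string));
        orders.AcceptChanges();

        orders.Rows.Add(1, "Alice");                 // RowState = Added
        DataTable pending = orders.GetChanges();     // the pending unit of work
        Console.WriteLine(pending.Rows.Count);       // 1 - only unsaved rows

        // hand `pending` to the generated data layer / a DataAdapter, then:
        orders.AcceptChanges();                      // everything now counts as persisted
    }
}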


"Programmers must do programming on the side, or they're never as good as those who do."

As kpollock said, imagine saying that for doctors, or soldiers...

The main thing isn't so much whether they code, but whether they think about it. Computing science is an intellectual exercise; you don't necessarily need to code to think about the problems that make you better as a programmer.

It's not like Einstein got to play with particles and waves when he was off his research.


If you can only think of one way to do it, don't do it.

Whether it's an interface layout, a task flow, or a block of code, just stop. Do something to collect more ideas, like asking other people how they would do it, and don't go back to implementing until you have at least three completely different ideas and at least one crisis of confidence.

Generally, when I think something can only be done one way, or think only one method has any merit, it's because I haven't thought through the factors which ought to be influencing the design thoroughly enough. If I had, some of them would clearly be in conflict, leading to a mess and thus an actual decision rather than a rote default.

Being a solid programmer does not make you a solid interface designer

And following all of the interface guidelines in the world will only begin to help. If it's even humanly possible... There seems to be a peculiar addiction to making things 'cute' and 'clever'.


My controversial opinion? Java doesn't suck but Java APIs do. Why do Java libraries insist on making it hard to do simple tasks? And why, instead of fixing the APIs, do they create frameworks to help manage the boilerplate code? This opinion can apply to any language that requires 10 or more lines of code to read a line from a file.


PHP sucks ;-)

The proof is in the pudding.


(Unnamed) tuples are evil

  • If you're using tuples as a container for several objects with unique meanings, use a class instead.
  • If you're using them to hold several objects that should be accessible by index, use a list.
  • If you're using them to return multiple values from a method, use Out parameters instead (this does require that your language supports pass-by-reference)

  • If it's part of a code obfuscation strategy, keep using them!

I see people using tuples just because they're too lazy to bother giving NAMES to their objects. Users of the API are then forced to access items in the tuple based on a meaningless index instead of a useful name.
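
A minimal C# sketch of the first bullet (the names are invented for illustration): give the values real names instead of Item1/Item2.

// Instead of: Tuple<double, double> FindCenter() - callers would see .Item1 and .Item2
public class Point
{
    public double X { get; set; }
    public double Y { get; set; }
}

public class Shape
{
    public Point FindCenter()
    {
        return new Point { X = 1.5, Y = 2.5 };   // callers read .X and .Y
    }
}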


Java is not the best thing out there. Just because it comes with an 'Enterprise' sticker does not make it good. Nor does it make it fast. Nor does it make it the answer to every question.

Also, ROR is not all it is cracked up to be by the blogosphere.

While I am at it, OOP is not always good. In fact, I think it is usually bad.


There is a difference between a programmer and a developer. An example: a programmer writes pagination logic, a developer integrates pagination on a page.


Useful and clean high-level abstractions are significantly more important than performance

one example:

Too often I watch peers spending hours writing overcomplicated sprocs, or massive LINQ queries which return unintuitive anonymous types, for the sake of "performance".

They could achieve almost the same performance but with considerably cleaner, intuitive code.


A random collection of Cook's aphorisms...

  • The hardest language to learn is your second.

  • The hardest OS to learn is your second one - especially if your first was an IBM mainframe.

  • Once you've learned several seemingly different languages, you finally realize that all programming languages are the same - just minor differences in syntax.

  • Although one can be quite productive and marketable without having learned any assembly, no one will ever have a visceral understanding of computing without it.

  • Debuggers are the final refuge for programmers who don't really know what they're doing in the first place.

  • No OS will ever be stable if it doesn't make use of hardware memory management.

  • Low level systems programming is much, much easier than applications programming.

  • The programmer who has a favorite language is just playing.

  • Write the User's Guide FIRST!

  • Policy and procedure are intended for those who lack the initiative to perform otherwise.

  • (The Contractor's Creed): Tell'em what they need. Give'em what they want. Make sure the check clears.

  • If you don't find programming fun, get out of it or accept that although you may make a living at it, you'll never be more than average.

  • Just as the old farts have to learn the .NET method names, you'll have to learn the library calls. But there's nothing new there.
    The life of a programmer is one of constantly adapting to different environments, and the more tools you have hung on your belt, the more versatile and marketable you'll be.

  • You may piddle around a bit with little code chunks near the beginning to try out some ideas, but, in general, one doesn't start coding in earnest until you KNOW how the whole program or app is going to be laid out, and you KNOW that the whole thing is going to work EXACTLY as advertised. For most projects with at least some degree of complexity, I generally end up spending 60 to 70 percent of the time up front just percolating ideas.

  • Understand that programming has little to do with language and everything to do with algorithm. All of those nifty geegaws with memorable acronyms that folks have come up with over the years are just different ways of skinning the implementation cat. When you strip away all the OOPiness, RADology, Development Methodology 37, and Best Practice 42, you still have to deal with the basic building blocks of:

    • assignments
    • conditionals
    • iterations
    • control flow
    • I/O

Once you can truly wrap yourself around that, you'll eventually get to the point where you see (from a programming standpoint) little difference between writing an inventory app for an auto parts company, a graphical real-time TCP performance analyzer, a mathematical model of a stellar core, or an appointments calendar.

  • Beginning programmers work with small chunks of code. As they gain experience, they work with ever increasingly large chunks of code.
    As they gain even more experience, they work with small chunks of code.

Objects Should Never Be In An Invalid State

Unfortunately, so many ORM frameworks mandate zero-arg constructors for all entity classes, using setters to populate the member variables. In those cases, it's very difficult to know which setters must be called in order to construct a valid object.

MyClass c = new MyClass(); // Object in invalid state. Doesn't have an ID.
c.setId(12345); // Now object is valid.

In my opinion, it should be impossible for an object to ever find itself in an invalid state, and the class's API should actively enforce its class invariants after every method call.

Constructors and mutator methods should atomically transition an object from one valid state to another. This is much better:

MyClass c = new MyClass(12345); // Object starts out valid. Stays valid.

As the consumer of some library, it's a huuuuuuge pain to keep track of whether all the right setters have been invoked before attempting to use an object, since the documentation usually provides no clues about the class's contract.
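
A small hedged C# sketch of what "enforce the invariants" can look like in practice (the Account class and its rules are my own illustration, not from the answer):

using System;

public class Account
{
    public int Id { get; }
    public decimal Balance { get; private set; }

    public Account(int id, decimal openingBalance)
    {
        if (id <= 0) throw new ArgumentOutOfRangeException(nameof(id));
        if (openingBalance < 0) throw new ArgumentOutOfRangeException(nameof(openingBalance));
        Id = id;
        Balance = openingBalance;      // valid from the first moment
    }

    public void Withdraw(decimal amount)
    {
        if (amount <= 0 || amount > Balance)
            throw new InvalidOperationException("Withdrawal would violate the invariant.");
        Balance -= amount;             // still valid after every mutation
    }
}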


Ternary operators absolutely suck. They are the epitome of lazy-ass programming.

user->isLoggedIn() ? user->update() : user->askLogin();

This is so easy to screw up. A little change in revision #2:

user->isLoggedIn() && user->isNotNew(time()) ? user->update() : user->askLogin();

Oh yeah, just one more "little change."

user->isLoggedIn() && user->isNotNew(time()) ? user->update() 
    : user->noCredentials() ? user->askSignup()
        : user->askLogin();

Oh crap, what about that OTHER case?

user->isLoggedIn() && user->isNotNew(time()) && !user->isBanned() ? user->update() 
    : user->noCredentials() || !user->isBanned() ? user->askSignup()
        : user->askLogin();

NO NO NO NO. Just save us the code change. Stop being freaking lazy:

if (user->isLoggedIn()) {
    user->update();
} else {
    user->askLogin();
}

Because doing it right the first time will save us all from having to convert your crap ternaries AGAIN and AGAIN:

if (user->isLoggedIn() && user->isNotNew(time()) && !user->isBanned()) {
    user->update();
} else {
    if (user->noCredentials() || !user->isBanned()) {
        user->askSignup();
    } else {
        user->askLogin();
    }
}

Opinion: Duration in the development field does not always mean the same as experience.

Many shops look at "years of experience" in a language. Yes, 5 years of C# can make sense, since you may learn new tricks and whatnot. However, if you have been with the same company maintaining the same code base for a number of years, I feel you are not gaining the exposure to different situations that a person who works on varied situations and client needs does.

I once interviewed a person who prided himself on having 10 years of programming experience and having worked with VB5, 6, and VB.NET... all in the same company during that time. After more probing, I found out that while he worked with all of those versions of VB, he was only upgrading and constantly maintaining his original VB5 app. He never modified the architecture and let the upgrade wizards do their thing. I have interviewed people who only have 2 years in the field but have worked on multiple projects, and they have more "experience" than he did.


Development is 80% about the design and 20% about coding

I believe that developers should spend 80% of their time designing, at a fine level of detail, what they are going to build, and only 20% actually coding what they've designed. This will produce code with near-zero bugs and save a lot on the test-fix-retest cycle.

Getting to the metal (or IDE) early is like premature optimization, which is known to be the root of all evil. Thoughtful upfront design (I'm not necessarily talking about an enormous design document; simple drawings on a whiteboard will work as well) will yield much better results than just coding and fixing.


Associative Arrays / Hash Maps / Hash Tables (+ whatever it's called in your favourite language) are the best thing since sliced bread!

Sure, they provide fast lookup from key to value. But they also make it easy to construct structured data on the fly. In scripting languages it's often the only (or at least the most used) way to represent structured data.

IMHO they were a very important factor for the success of many scripting languages.

And even in C++ std::map and std::tr1::unordered_map helped me writing code faster.
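
In C# terms, a tiny sketch of the "structured data on the fly" point, using Dictionary as the .NET counterpart of the associative arrays praised above:

using System;
using System.Collections.Generic;

class HashMapSketch
{
    static void Main()
    {
        var wordCounts = new Dictionary<string, int>();
        foreach (var word in "the quick brown fox jumps over the lazy dog the end".Split(' '))
        {
            int n;
            wordCounts[word] = wordCounts.TryGetValue(word, out n) ? n + 1 : 1;
        }
        Console.WriteLine(wordCounts["the"]);   // 3 - fast lookup from key to value
    }
}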


We're software developers, not C/C#/C++/PHP/Perl/Python/Java/... developers.

After you've been exposed to a few languages, picking up a new one and being productive with it is a small task. That is to say that you shouldn't be afraid of new languages. Of course, there is a large difference between being productive and mastering a language. But that's no reason to shy away from a language you've never seen. It bugs me when people say, "I'm a PHP developer," or when a job offer says, "Java developer". After a few years of experience as a developer, new languages and APIs really shouldn't be intimidating, and going from never having seen a language to being productive with it shouldn't take very long at all. I know this is controversial, but it's my opinion.


My one:

Long switch statements are your friends. Really. At least in C#.

People tend to avoid long switch statements and discourage others from using them because they are "unmanageable" and "have bad performance characteristics".

Well, the thing is that in C#, switch statements are always compiled automagically to hash jump tables so actually using them is the Best Thing To Do™ in terms of performance if you need simple branching to multiple branches. Also, if the case statements are organized and grouped intelligently (for example in alphabetical order), they are not unmanageable at all.
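
For illustration, a small C# sketch of the kind of switch being defended; for dense integer cases the compiler can emit a jump table, and large string switches get a hashed lookup, so readability is the real concern, not speed:

public static class CommandDispatcher
{
    public static string Describe(int opcode)
    {
        switch (opcode)
        {
            case 0: return "NOP";
            case 1: return "LOAD";
            case 2: return "STORE";
            case 3: return "ADD";
            case 4: return "SUB";
            case 5: return "JUMP";
            case 6: return "CALL";
            case 7: return "RET";
            default: return "UNKNOWN";
        }
    }
}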


I hate universities and institutes offering short courses teaching programming to newcomers. It is an outright disgrace and shows contempt for the art and science of programming.

They start teaching C, Java, VB (disgusting) to people without a good grasp of hardware and the fundamental principles of computers. They should first be taught about the MACHINE, by books like Morris Mano's Computer System Architecture, and then taught the concept of instructing the machine to solve problems, instead of having the semantics and syntax of one programming language etched into them.

Also, I don't understand government schools and colleges teaching children the basics of computers using commercial operating systems and software. At least in my country (India), not many students can afford to buy operating systems or even discounted office suites, let alone the development software juggernaut (compilers, IDEs etc). This prompts theft and piracy and makes the act of copying and stealing software from their institutes' libraries feel justified.

Again, they are taught to use some products, not the fundamental ideas.

Think about it: what if you were taught only that 2x2 is 4, and not the concept of multiplication?

Or if you were taught how to measure the length of a pole leaning against your school's compound wall, but never the Pythagorean theorem?


Upfront design - don't just start writing code because you're excited to write code

I've seen SO many apps that are poorly designed because the developer was so excited to get coding that they just opened up a white page and started writing code. I understand that things change during the development lifecycle. However, it's difficult working with applications that have several different layouts and development methodologies from form to form, method to method.

It's difficult to hit the target your application is meant to handle if you haven't clearly defined the task and how you plan to code it. Take some time (and not just 5 minutes) and make sure you've laid out as much of it as you can before you start coding. This way you'll avoid a spaghetti mess that your replacement will have to support.


Lower level languages are inappropriate for most problems.


Modern C++ is a beautiful language.

There, I said it. A lot of people really hate C++, but honestly, I find modern C++ with STL/Boost style programming to be a very expressive, elegant, and incredibly productive language most of the time.

I think most people who hate C++ are basing that on bad experiences with OO. C++ doesn't do OO very well because polymorphism often depends on heap-allocated objects, and C++ doesn't have automatic garbage collection.

But C++ really shines when it comes to generic libraries and functional-programming techniques which make it possible to build incredibly large, highly-maintainable systems. A lot of people say C++ tries to do everything, but ends up doing nothing very well. I'd probably agree that it doesn't do OO as well as other languages, but it does generic programming and functional programming better than any other mainstream C-based language. (C++0x will only further underscore this truth.)

I also appreciate how C++ lets me get low-level if necessary, and provides full access to the operating system.

Plus RAII. Seriously. I really miss destructors when I program in other C-based languages. (And no, garbage collection does not make destructors useless.)


Functional programming is NOT more intuitive or easier to learn than imperative programming.

There are many good things about functional programming, but I often hear functional programmers say it's easier to understand functional programming than imperative programming for people with no programming experience. From what I've seen it's the opposite, people find trivial problems hard to solve because they don't get how to manage and reuse their temporary results when you end up in a world without state.


Any sufficiently capable library is too complicated to be usable, and any library simple enough to be usable lacks the capabilities needed to be a good general solution.

I run into this constantly. Exhaustive libraries that are so complicated to use I tear my hair out, and simple, easy-to-use libraries that don't quite do what I need them to do.


Use type inference anywhere and everywhere possible.

Edit:

Here is a link to a blog entry I wrote several months ago about why I feel this way.

http://blogs.msdn.com/jaredpar/archive/2008/09/09/when-to-use-type-inference.aspx
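
A minimal C# illustration of the opinion (my example, not from the linked post): the compiler still knows the exact type; you just stop writing it twice.

using System.Collections.Generic;

class TypeInferenceSketch
{
    static void Main()
    {
        // Explicitly typed - the type name appears twice:
        Dictionary<string, List<int>> scoresExplicit = new Dictionary<string, List<int>>();

        // Inferred - same static type, checked at compile time, half the noise:
        var scores = new Dictionary<string, List<int>>();
        scores["alice"] = new List<int> { 90, 85 };
    }
}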


Source Control: Anything But SourceSafe

Also: Exclusive locking is evil.

I once worked somewhere where they argued that exclusive locks meant you were guaranteeing that people were not overwriting someone else's changes when you checked in. The problem was that in order to get any work done, if a file was locked, devs would just change their local copy to writable and then merge (or overwrite) the source-controlled version with theirs when they got the chance.


My controversial view is that the "While" construct should be removed from all programming languages.

You can easily replicate While using "Repeat" and a boolean flag (see the sketch below), and I just don't believe that it's useful to have the two structures. In fact, I think that having both "Repeat...Until" and "While...EndWhile" in a language confuses new programmers.
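
For example, here is a minimal C# sketch (Condition and DoWork are placeholders) showing a While loop rewritten with only a do-loop -- the closest C# has to Repeat -- and a boolean flag:

using System;

class WhileWithoutWhile
{
    // Illustrative stand-ins; any condition and body would do.
    static int counter = 0;
    static bool Condition() { return counter < 3; }
    static void DoWork() { Console.WriteLine("work " + counter); counter++; }

    static void Main()
    {
        // Equivalent of: while (Condition()) DoWork();
        bool keepGoing = Condition();
        do
        {
            if (keepGoing)
            {
                DoWork();
                keepGoing = Condition();
            }
        } while (keepGoing);
    }
}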

Update - Extra Notes

One common mistake new programmers make with While is they assume that the code will break as soon as the tested condition flags false. So - If the While test flags false half way through the code, they assume a break out of the While Loop. This mistake isn't made as much with Repeat.

I'm actually not that bothered which of the two loops types is kept, as long as there's only one loop type. Another reason I have for choosing Repeat over While is that "While" functionality makes more sense written using "repeat" than the other way around.

Second Update: I'm guessing that the fact I'm the only person currently running with a negative score here means this actually is a controversial opinion. (Unlike the rest of you. Ha!)


This one is not exactly on programming, because html/css are not programming languages.

Tables are ok for layout

CSS and divs can't do everything; save yourself the hassle and use a simple table, then use CSS on top of it.


Design patterns are bad.

Actually, design patterns aren't.

You can write bad code, and bury it under a pile of patterns. Use singletons as global variables, and states as goto's. Whatever.

A design pattern is a standard solution for a particular problem, but requires you to understand the problem first. If you don't, design patterns become a part of the problem for the next developer.


If you haven't read a man page, you're not a real programmer.


All source code and comments should be written in English

Writing source code and/or comments in languages other than English makes it less reusable and more difficult to debug if you don't understand the language they are written in.

Same goes for SQL tables, views, and columns, especially when abbreviations are used. If they aren't abbreviated, I might be able to translate the table/column name on-line, but if they're abbreviated all I can do is SELECT and try to decipher the results.


If you want to write good software then step away from your computer

Go and hang out with the end users and the people who want and need the software. Only from them will you understand what your software needs to accomplish and how it needs to do that.

  • Ask them what they love & hate about the existing processes.
  • Ask them about the future of their processes, where it is headed.
  • Hang out and see what they use now and figure out their usage patterns. You need to meet and match their usage expectations. See what else they use a lot, particularly if they like it and can use it efficiently. Match that.

The end user doesn't give a rat's how elegant your code is or what language it's in. If it works for them and they like using it, you win. If it doesn't make their lives easier and better - they hate it, you lose.

Walk a mile in their shoes - then go write your code.


Classes should fit on the screen.

If you have to use the scroll bar to see all of your class, your class is too big.

Code folding and miniature fonts are cheating.


Web services absolutely suck, and are not the way of the future. They are ridiculously inefficient and they don't guarantee ordered delivery. Web services should NEVER be used within a system where both client and server are being written. They are mostly useful for Mickey Mouse mash-up type applications. They should definitely not be used for any kind of connection-oriented communication.

This stance has gotten myself and colleagues into some very heated discussions, since web services is such a buzzy topic. Any project that mandates the use of web services is doomed because it is clearly already having ridiculous demands pushed down from management.


Developers should be able to modify production code without getting permission from anyone as long as they document their changes and notify the appropriate parties.


Design patterns are a waste of time when it comes to software design and development.

Don't get me wrong, design patterns are useful but mainly as a communication vector. They can express complex ideas very concisely: factory, singleton, iterator...

But they shouldn't serve as a development method. Too often developers architect their code using a flurry of design-pattern-based classes where a more concise design would be better, both in terms of readability and performance. All that with the illusion that individual classes could be reused outside their domain. If a class is not designed for reuse or isn't part of the interface, then it's an implementation detail.

Design patterns should be used to put names on organizational features, not to dictate the way code must be written.

(It was supposed to be controversial, remember?)


I really dislike when people tell me to use getters and setters instead of making the variable public when you should be able to both get and set the class variable.

I totally agree with it if it's to change a variable in an object inside your object, so you don't get things like a.b.c.d.e = something; but I would rather use a.x = something; than a.setX(something); I think a.x = something; is both easier to read and prettier than set/get in the same example.

I don't see the reason for writing:

void setX(T x) { this->x = x; }

T getX() { return x; }

which is more code and more time when you do it over and over again, and it just makes the code harder to read.
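
For contrast, a quick sketch of the version preferred above, plus the C# middle ground (T replaced by int purely for illustration):

public class A
{
    public int x;                  // used as: a.x = something;
}

public class B
{
    public int X { get; set; }     // auto-property: same call-site syntax as a field,
                                   // but logic can be added later without breaking callers
}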


Separation of concerns is evil :)

Only separate concerns if you have good reason for it. Otherwise, don't separate them.

I have encountered too many occasions of separation only for the sake of separation. The second half of Dijkstra's statement "Minimal coupling, maximal cohesion" should not be forgotten. :)

Happy to discuss this further.


If you're a developer, you should be able to write code

I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:

Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.

It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:

Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.

Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.

I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!
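
For reference, one possible answer to the first question (my sketch, not a reference solution from the interviews): keep adding terms until the next one can no longer affect the fifth decimal place.

using System;

class PiEstimator
{
    static double EstimatePi()
    {
        double sum = 0.0;
        double sign = 1.0;
        for (long i = 0; ; i++)
        {
            sum += sign / (2 * i + 1);
            sign = -sign;
            // For an alternating series the remaining error is at most the next term.
            if (4.0 / (2 * i + 3) < 0.000005)
                return Math.Round(4 * sum, 5);
        }
    }

    static void Main() { Console.WriteLine(EstimatePi()); }   // 3.14159
}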


Edit:

There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into this here (that's a whole new question) apart from to say you're largely missing the point of the post.

Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.


Exceptions should only be used in truly exceptional cases

It seems like the use of exceptions has run rampant on the projects I've worked on recently.

Here's an example:

We have filters that intercept web requests. The filter calls a screener, and the screener's job is to check to see if the request has certain input parameters and validate the parameters. You set the fields to check for, and the abstract class makes sure the parameters are not blank, then calls a screen() method implemented by your particular class to do more extended validation:

public boolean processScreener(HttpServletRequest req, HttpServletResponse resp, FilterConfig filterConfig) throws Exception {
    if (!checkFieldExistence(req)) {
        return false;
    }
    return screen(req, resp, filterConfig);
}

That checkFieldExistence(req) method never returns false. It returns true if none of the fields are missing, and throws an exception if a field is missing.

I know that this is bad design, but part of the problem is that some architects here believe that you need to throw an exception every time you hit something unexpected.

Also, I am aware that the signature of checkFieldExistence(req) does declare that it throws an Exception; it's just that almost all of our methods do, so it didn't occur to me that this method might throw an exception instead of returning false. I only noticed it once I dug through the code.
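
For what it's worth, here is an illustrative C# sketch of the distinction being argued for (the names are hypothetical, not from the filter code above): expected problems come back through the return value, and exceptions are reserved for genuine programming errors.

using System;
using System.Collections.Generic;

static class ScreenerSketch
{
    static bool HasRequiredFields(IDictionary<string, string> form, params string[] required)
    {
        foreach (var field in required)
        {
            string value;
            if (!form.TryGetValue(field, out value) || string.IsNullOrEmpty(value))
                return false;                      // missing user input is normal, not exceptional
        }
        return true;
    }

    static bool Screen(IDictionary<string, string> form)
    {
        if (form == null)
            throw new ArgumentNullException(nameof(form));   // a caller bug - truly exceptional
        return HasRequiredFields(form, "username", "password");
    }
}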


You don't always need a database.

If you need to store less than a few thousand "things" and you don't need locking, flat files can work and are better in a lot of ways. They are more portable, and you can hand edit them in a pinch. If you have proper separation between your data and business logic, you can easily replace the flat files with a database if your app ever needs it. And if you design it with this in mind, it reminds you to have proper separation between your data and business logic.
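
A small C# sketch of that separation (all names are hypothetical): business code talks to an interface, and today's flat-file implementation can be swapped for a database later without touching callers.

using System.Collections.Generic;
using System.IO;
using System.Linq;

public class Thing
{
    public int Id;
    public string Name;
}

// The seam between business logic and storage.
public interface IThingStore
{
    List<Thing> LoadAll();
    void SaveAll(List<Thing> things);
}

// Flat-file implementation: one tab-separated record per line.
public class FileThingStore : IThingStore
{
    private readonly string path;
    public FileThingStore(string path) { this.path = path; }

    public List<Thing> LoadAll()
    {
        if (!File.Exists(path)) return new List<Thing>();
        return File.ReadAllLines(path)
                   .Select(line => line.Split('\t'))
                   .Select(parts => new Thing { Id = int.Parse(parts[0]), Name = parts[1] })
                   .ToList();
    }

    public void SaveAll(List<Thing> things)
    {
        File.WriteAllLines(path, things.Select(t => t.Id + "\t" + t.Name));
    }
}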

--
bmb


Opinion: developers should be testing their own code

I've seen too much crap handed off to test only to have it not actually fix the bug in question, incurring communication overhead and fostering irresponsible practices.


Member variables should never be declared private (in java)

If you declare something private, you prevent any future developer who derives from your class from using it to extend the functionality. Essentially, by writing "private" you are implying that you know more now about how your class can be used than any future developer ever will. Whenever you write "private", you ought to write "protected" instead.

Classes should never be declared final (in java)

Similarly, if you declare a class as final (which prevents it from being extended -- prevents it from being used as a base class for inheritance), you are implying that you know more than any future programmer might know about what is the right and proper way to use your class. This is never a good idea. You don't know everything. Someone might come up with a perfectly suitable way to extend your class that you didn't think of.

Java Beans are a terrible idea.

The Java Bean convention -- declaring all members as private and then writing get() and set() methods for every member -- forces programmers to write boilerplate, error-prone, tedious, and lengthy code where no code is needed. Just make member variables public! Trust in your ability to change it later, if you need to change the implementation (hint: 99% of the time, you never will).


Development projects are bound to fail unless the team of programmers is given, as a whole, complete empowerment to make all decisions related to the technology being used.


It is OK to use short variable names

But not for indices in nested loops.



Manually halting a program is an effective, proven way to find performance problems.

Believable? Not to most. True? Absolutely.

Programmers are far more judgmental than necessary.

Witness all the things considered "evil" or "horrible" in these posts.

Programmers are data-structure-happy.

Witness all the discussions of classes, inheritance, private-vs-public, memory management, etc., versus how to analyze requirements.