Reuse is a good thing, right? Given the option to reuse good code or write your own, you would always pick reuse. We've heard tales of horrible programmers and their NIH (Not Invented Here) syndrome, and we're not like those guys. We want to leave our pride at the door and hold up the shining shield of reuse and the burning sword of standardization.
Not so fast.
Reuse, like ANYTHING ELSE, has its tradeoffs. On the one hand, you get "free code". On the other hand, it's not free, because you have to learn how to use it, debug it, and write integration code to get it into your project. On the one hand, you get "free upgrades" as a separate development team makes upgrades and enhancements. On the other hand, you're susceptible to changing interfaces, newly introduced bugs, and silent-but-deadly logic changes. You are dependent on that code and its uncertain future.
Joel defends NIH for good reason. His basic premise is something I've been saying for quite some time: software development is the practice of managing dependencies, and the dependencies that are easiest to manage are those that are not there. Just like in life, the more dependencies you have, the less agile you are (those with children: how easy is it to do something spontaneously?) - and in the tech industry agility is everything.
Often the development cost of the product increases when you eliminate dependencies because of some duplication; however, the ability to improve the product quickly also increases, and that will gain enough income to overcome the increased development costs. Now, don't get me wrong, you shouldn't go rewrite Windows just to avoid being dependent on it; there is something to be said for interoperability as well. However, if something is core to what you do or plays a vital role in your product, then you should own it, with a one-to-one dependency chain between you and it. The cost will be well worth the rewards.
A Lost Voice