What if software is not maintainable and efficient?
Software maintenance is typically the most expensive phase of development, often consuming more than half of the total budget, so it is important to plan maintenance into the development lifecycle from the start. There are a number of reasons to maintain software after you have delivered it to the customer: fixing bugs, optimizing existing functionality, and adjusting code to prevent future issues.
Software quality and code quality can make a world of difference for software maintenance. Poor-quality software is harder to maintain; bad code requires a larger effort, and costs more, to adapt to new requirements. Some languages, Haskell for example, enforce good structure at a programmatic and syntactic level. Also keep in mind that somebody may have already implemented a solution you can leverage.
Take the time to think about and research any such options, if appropriate and available. By using a third-party or open-source library that adds some interesting functionality, you are making a commitment to, and becoming dependent upon, that library. Below are some of the more common examples of things you should probably not be reinventing in the modern age, unless building them is itself the point of your project. Figure out which of the CAP properties (consistency, availability, partition tolerance) you need for your project, then choose a database with the right properties.
You should, in most circumstances, not be writing raw queries against whatever database you happen to choose. More likely than not, a library exists to sit between the database and your application code, separating the concerns of managing concurrent database sessions and the details of the schema from your main code.
At the very least, you should never have raw queries or SQL inline in the middle of your application code. Rather, wrap each query in a function and centralize all of those functions in a single file with an obvious name. This kind of centralization also makes it much easier to keep a consistent style across your queries, and limits the number of places you have to change when the schema changes. Choose whichever style you like best, and stick to it.
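As a minimal sketch of this centralization (the module name, table, and columns are hypothetical), all SQL lives in one module of small functions, here using Python's built-in sqlite3; the rest of the application calls these functions and never touches SQL directly:

```python
# queries.py -- hypothetical module that centralizes every SQL statement.
# Application code imports these functions; no raw SQL appears anywhere else.
import sqlite3

def get_user_by_id(conn: sqlite3.Connection, user_id: int):
    """Return the (id, name) row for a user, or None if not found."""
    return conn.execute(
        "SELECT id, name FROM users WHERE id = ?", (user_id,)
    ).fetchone()

def rename_user(conn: sqlite3.Connection, user_id: int, new_name: str) -> None:
    """Update a user's name; schema knowledge stays inside this module."""
    conn.execute("UPDATE users SET name = ? WHERE id = ?", (new_name, user_id))
    conn.commit()
```

If the schema changes, this one file is the only place the queries need to be edited.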
Different kinds of logic dealing with different kinds of data should be physically isolated in the codebase (again, the separation-of-concerns concept, reducing cognitive load on the future reader). The code that updates your UI should be physically distinct from the code that calculates what goes into the UI, for example. And if the compiler can catch logical errors in your code and prevent bad behavior, bugs, or outright crashes, we should absolutely take advantage of that.
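A tiny illustration of that UI separation (the cart and its field names are invented for the example): one function computes the value, another formats it for display, so the calculation can be tested and changed without touching any presentation code:

```python
# Hypothetical example: the calculation and the presentation of a cart
# total live in separate functions (ideally, separate modules).

def cart_total(items):
    """Pure calculation: knows nothing about how the result is displayed."""
    return sum(qty * price for qty, price in items)

def render_cart_total(items):
    """Presentation only: formats the computed value for the UI."""
    return f"Total: ${cart_total(items):.2f}"
```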
Of course, some languages have compilers that make this easier than others. In my role as an Engineering Manager at Capital One, I work to impress the following standards on my teams to ensure that we are delivering maintainable software solutions. Designing a maintainable solution calls for a modularized solution with reusable components. Targeting highly reusable components and modularization of every single feature requires expert developers, thereby increasing cost.
But these aspects pay off in the long run through decreased maintenance costs and the flexibility to make changes. A good design should strive to balance these aspects against the requirements of the product. While most of them can be handled by building the product on a good framework, every developer must still take them into consideration while writing code.
All good software systems must have a good logging scheme, and this logging must be done with a purpose. Every log event must be comprehensive, containing meaningful information. In his article Logging Wisdom: How to Log, Emil Stenqvist suggests that a program should write logs as if keeping a journal of its execution: major branching points, processes starting, and so on. Logs must be written so that they capture the data that is meaningful for the purpose for which they are written.
Logging needs must be identified at the time of feature grooming, and the motivation for logging must drive the details that go into each event. For example, if a log event is written when a user sees an error message, it is important to log the user ID, the date and time of the error, and the details of the system state or data that resulted in the user seeing that error message.
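A sketch of such an event (the field names and error code are illustrative, not from any particular system): build a structured record carrying the user ID, a timestamp, and the state that caused the error, then emit it through the standard logging module:

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("app")

def build_error_event(user_id: str, error_code: str, state: dict) -> dict:
    """Assemble one structured, comprehensive record for a user-facing error."""
    return {
        "event": "user_error",
        "user_id": user_id,
        "error_code": error_code,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # the data that led to the error -- keep sensitive fields out of it
        "state": state,
    }

def log_user_error(user_id: str, error_code: str, state: dict) -> None:
    """Emit the record as one JSON line, easy for support tooling to search."""
    logger.error(json.dumps(build_error_event(user_id, error_code, state)))
```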
It is important to be careful not to store sensitive information in logs, or to encrypt it if it must be stored. Error messages displayed to the user must help the user understand why they received the error and what steps they can take to resolve it.
In addition, if these errors are caused by system problems, then all data relevant to understanding the cause must be logged in the application logs. This will help support teams quickly identify why the error happened. Effort must also be made to make messages unique, so that when a user asks about one, support can immediately provide an answer rather than trying to work out which of many possible causes produced the issue.
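One simple way to keep messages unique (the codes and wording below are invented for illustration) is a small catalog keyed by an error code, so a code reported by a user maps to exactly one cause for support to check:

```python
# Hypothetical error catalog: every user-visible message carries a unique
# code, shown alongside the message the user sees.
ERROR_CATALOG = {
    "PAY-001": "Your card was declined by the issuing bank.",
    "PAY-002": "The payment session expired; please try again.",
    "ACC-001": "This email address is already registered.",
}

def user_message(code: str) -> str:
    """Format the message shown to the user, prefixed with its unique code."""
    return f"[{code}] {ERROR_CATALOG[code]}"
```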
Ensuring that infrastructure and application monitoring are designed and implemented at the time of application development is a key criterion for good, maintainable software. While infrastructure monitoring covers aspects like memory, CPU utilization, and the number of running instances, application monitoring must be built into the software itself, through its logs and instrumentation.
In his blog post Logging v. Instrumentation, Peter Bourgon discusses when to use logging versus when to use instrumentation to ultimately increase the system's observability. Coming at the question from the side of a developer who works on high-performance code, there are several things to consider in design. We've all been "taught" that there are tradeoff curves, and we have all assumed we are such optimal programmers that any given program we write is so tight it sits on the curve.
If a program is on the curve, any improvement in one dimension necessarily incurs a cost in the other. In my experience, programs only get near any curve by being tuned, tweaked, hammered, waxed, and in general turned into "code golf". Most programs have plenty of room for improvement in all dimensions. Here's what I mean: highly performing software components are generally orders of magnitude more complex than other software components, all other things being equal.
Even then it is not so clear-cut: if performance metrics are a critically important requirement, then it is imperative that the design have the complexity to accommodate them. The danger is a developer who wastes a sprint on a relatively simple feature trying to squeeze a few extra milliseconds out of their component.
Regardless, the complexity of a design directly affects how quickly a developer can learn and become familiar with it, and further modifications to a complex component can introduce bugs that unit tests might not catch. That said, a poorly performing software component may perform poorly simply because it was foolishly written and unnecessarily complex through the ignorance of the original author: making eight database calls to build a single entity when just one would do, completely unnecessary code that results in a single code path regardless, and so on. Fixing these cases is more a matter of improving code quality, with the performance increase arriving as a consequence of the refactor rather than as its intended goal.
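The "eight calls where one would do" case can be sketched with sqlite3 (the table and column names are hypothetical): the slow version issues one round trip per item, while the fixed version fetches everything in a single IN query:

```python
import sqlite3

def item_names_slow(conn: sqlite3.Connection, item_ids):
    # Anti-pattern: one query (and one round trip) per item.
    return [
        conn.execute("SELECT name FROM items WHERE id = ?", (i,)).fetchone()[0]
        for i in item_ids
    ]

def item_names_fast(conn: sqlite3.Connection, item_ids):
    # Same result from a single query; note IN does not guarantee order.
    placeholders = ",".join("?" * len(item_ids))
    rows = conn.execute(
        f"SELECT name FROM items WHERE id IN ({placeholders})", list(item_ids)
    ).fetchall()
    return [r[0] for r in rows]
```

The refactor is motivated by the code quality (one obvious query instead of a loop of queries); the latency win over a real network connection comes along for free.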
Assuming a well-designed component, however, it will always be less complex than a similarly well-designed component tuned for performance, all other things being equal. It is not so much that those things cannot coexist. The problem is that everyone's code is slow, unreadable, and unmaintainable on the first iteration; the rest of the time is spent improving whatever is most important. If that is performance, then go for it. Don't write spitefully awful code, but if it just has to be X fast, then make it X fast.
I believe that performance and cleanliness are basically uncorrelated: performant code does not cause ugly code. However, if you spend your time tuning every bit of code to be fast, guess what you did not spend your time doing: making your code clean and maintainable. So performance and readability are only modestly related, and in most cases there is no big incentive to prefer the former over the latter. And I am talking here about high-level languages. In my opinion, performance should be a consideration only when it is an actual, demonstrated problem.
Not doing so tends to lead to micro-optimizations, which can produce obfuscated code just to save a few microseconds here and there, which in turn makes the code less maintainable and less readable. Instead, one should focus on the real bottlenecks of the system, if needed, and put the emphasis on performance there. The point is not that readability should always trump efficiency: if you know from the outset that your algorithm needs to be highly efficient, then that will be one of the factors you use to develop it.
The thing is, most use cases don't need blindingly fast code. In many cases, I/O or user interaction causes far more delay than your algorithm's execution does.
The point is that you should not go out of your way to make something more efficient if you don't know it is the bottleneck. Optimizing code for performance often makes it more complicated, because it generally involves doing things in a clever way instead of the most intuitive one.
More complicated code is harder to maintain and harder for other developers to pick up; both of these are costs that must be weighed. At the same time, compilers are very good at optimizing common cases. It is possible that your attempt to improve a common case means the compiler no longer recognizes the pattern, and thus cannot help keep your code fast.
It should be noted that this does not mean you should write whatever you want without concern for performance; you should not do anything that is clearly inefficient. The point is not to worry about little things that might make things marginally better.
Use a profiler to verify that (1) what you have now is actually an issue, and (2) what you changed it to is an improvement. I think most programmers get that gut feeling simply because, most of the time, performance code is based on far more information about the context, the hardware, and the global architecture than any other code in an application. Most code expresses solutions to specific problems encapsulated behind abstractions, such as functions, and that means limiting knowledge of the context to only what crosses that boundary, such as function parameters.
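That measure-first loop can be sketched with the standard-library cProfile (the workload function here is just a stand-in for a suspected hotspot): run the code under the profiler, read the report, and only then decide what to optimize:

```python
import cProfile
import io
import pstats

def slow_sum(n: int) -> int:
    # Stand-in for the code you suspect is a hotspot.
    return sum(i * i for i in range(n))

def profile(func, *args):
    """Run func under cProfile and return (result, text report of top entries)."""
    pr = cProfile.Profile()
    pr.enable()
    result = func(*args)
    pr.disable()
    buf = io.StringIO()
    pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
    return result, buf.getvalue()
```

Run it before the change and again after; if the function no longer dominates the report, the optimization was real.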
When you write for high performance, after you fix any algorithmic optimizations, you get into details that require far more knowledge about the context. That might naturally overwhelm any programmer who doesn't feel focused enough for the task. And there is another reason efficiency gets neglected: the cost of global warming from those extra CPU cycles (scaled by hundreds of millions of PCs plus massive data-center facilities) and the mediocre battery life on users' mobile devices, as required to run poorly optimized code, rarely shows up on most programmers' performance or peer reviews.
It's an economic negative externality, similar to a form of ignored pollution. Hardware designers have been working hard to add power-save and clock-scaling features to the latest CPUs; it's up to programmers to let the hardware take advantage of these capabilities more often, by not chewing up every available CPU clock cycle. Early on, computer time was the dominant expense and optimization was paramount; then the cost of developing and maintaining code became greater than the cost of the computers, and optimization fell way out of favor compared with programmer productivity.
Now, however, another cost is overtaking the cost of the computers themselves: powering and cooling all those data centers now costs more than all the processors inside them.