Though I'll be the first to admit that I'm very inconsistent on this point, I do try to follow the best practice of having a single point of exit from each method.
Yes, you can find lots of places where I don't do this, but that's just my own inconsistency showing...
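For anyone unfamiliar with the pattern, here's a minimal C# sketch (the method names and logic are hypothetical, purely for illustration) contrasting multiple exit points with a single exit point:

```csharp
// Hypothetical example - not code from CSLA itself.

// Multiple points of exit: each early return leaves the method separately.
public static string GetStatusMultiExit(int code)
{
    if (code == 0)
        return "OK";
    if (code < 0)
        return "Error";
    return "Warning";
}

// Single point of exit: one return statement at the bottom of the method.
public static string GetStatusSingleExit(int code)
{
    string result;
    if (code == 0)
        result = "OK";
    else if (code < 0)
        result = "Error";
    else
        result = "Warning";
    return result;
}
```

The single-exit version is a little more verbose, but there's exactly one place to set a breakpoint or add logging before the method returns.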
The real challenge here is that you can't tell whether there's any gain at all from what you propose. Even if you look at the IL and find that the C# compiler didn't already optimize the code, the JIT compiler may very well optimize it at runtime.
I still remember this program I wrote in VAX FORTRAN many, many years ago, which was entirely optimized away by the compiler. It managed to detect that I wasn't using the output of the app (which was true - it was a timing test for performance) and so it decided that none of the code in the app was needed and it generated just a single line of assembly output: .END
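The lesson carries over directly: if you want to know whether a change like this actually pays off, measure it, and make sure the compiler can't pull the same trick on you. Here's a rough timing-harness sketch (the iteration count and DoWork method are hypothetical stand-ins for whatever you're testing):

```csharp
using System;
using System.Diagnostics;

// A rough timing harness - adjust the iteration count to get stable numbers.
public static class TimingTest
{
    public static void Main()
    {
        const int iterations = 10_000_000;
        long checksum = 0; // accumulate results so the work is actually "used"

        var watch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            checksum += DoWork(i);
        watch.Stop();

        // Printing the checksum keeps the loop's output observable,
        // unlike my old FORTRAN app, so the JIT can't just delete the work.
        Console.WriteLine($"{watch.ElapsedMilliseconds} ms (checksum {checksum})");
    }

    // Hypothetical stand-in for the code under test.
    private static long DoWork(int i) => i % 7;
}
```

Run both versions of the code you're comparing through something like this; if the difference is lost in the noise, the "optimization" isn't buying you anything.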