Be more emotional.
But not all the time. Use your intelligence to decide when. ;)
When these are given, is it because there is nothing more important to say?
Or, does such feedback come from people who don't understand the bigger picture or want a quick way to show they've looked at the code without having to take the time and mental effort to truly understand it?
Are you just repeating what's already been said before?
Where/when appropriate, are you correctly attributing others?
Does your comment add value?
Are you adding a new/different/unique perspective?
Have you read the other comments first?
Have you thought about who (possibly many people) will read the comment?
In the last few days MAUI App Accelerator passed ten thousand "official" unique installs.
This doesn't include the almost eight thousand installs that came via the MAUI Essentials extension pack. (Extension packs install extensions in a different way, which means those installs aren't included in the individual extension's install count.)
While big numbers are nice (and apparently worth celebrating), I'm more interested in how it's used.
The numbers for that are lower, but still noteworthy.
It's currently used to create about 25 new apps each day. Which is nice.
I'm also trying to improve my ability to use App Insights so I can gather other, better statistics too.
More updates are coming. Including the most potentially useful one...
I quite often use the phrase "I'm not smart enough to use this" when working with software tools.
This is actually code for one or more of the following:
Do your users/customers ever say similar things?
Would they tell you?
Are you set up to hear them?
And ready to hear this?
Or will you tell me that I'm "holding it wrong"?
Should all the rules for formatting and structuring code used in automated tests always be the same as those used in the production code?
Of course, the answer is "it depends!"
I prefer my test methods to be as complete as possible. I don't want too many details hidden in "helper" methods, as this means the details of what's being tested get spread out.
As a broad generalization, I may have two helpers called from a test.
One to create the System Under Test.
And, one for any advanced assertions. These are usually to wrap multiple checks against complex objects or collections. I'll typically create these to provide more detailed (& specific) information if an assertion fails. (e.g. "These string arrays don't match" isn't very helpful. "The strings at index 12 are of different lengths" helps me identify where the difference is and what the problem may be much faster.)
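As a minimal sketch of what I mean (the names, and the use of MSTest, are just for illustration):

using Microsoft.VisualStudio.TestTools.UnitTesting;

internal static class CustomAssert
{
    // Wraps multiple checks against two string arrays and reports *where*
    // they differ, rather than just that they differ.
    public static void StringArraysAreEqual(string[] expected, string[] actual)
    {
        Assert.AreEqual(expected.Length, actual.Length, "The arrays are of different lengths.");

        for (var i = 0; i < expected.Length; i++)
        {
            Assert.AreEqual(
                expected[i].Length,
                actual[i].Length,
                $"The strings at index {i} are of different lengths.");

            Assert.AreEqual(expected[i], actual[i], $"The strings at index {i} don't match.");
        }
    }
}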
A side-effect of this is that I may have lots of tests that call the same method. If the signature of that method needs to change, I *might* have to change it in every test that calls it.
I could move some of these calls into other methods called by the tests and then only have to change the helpers, but I find this makes the tests harder to read on their own.
Instead, where possible, I create an overload of the changed method that uses the old signature, and which calls the new one.
If the old tests are still valid, I don't want to change them.
If the method signature has changed because of a new requirement, I add new tests for the new requirement.
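As a sketch of that overload trick (all the names here are made up): if a helper that creates the System Under Test gains a new parameter, the old overload stays and forwards to the new one.

// The original signature, kept so existing (still valid) tests don't have to change.
private static WidgetService CreateSut()
    => CreateSut(useCache: false);

// The new signature, added for the new requirement. New tests call this directly.
private static WidgetService CreateSut(bool useCache)
    => new WidgetService(useCache);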
I'll be speaking at DDD South West later this month.
I'm one of the last sessions of the day. It's a slot I've not had before.
In general, if talking as part of a larger event, I try to include something to link back to earlier talks in the day. Unless I'm first, obviously.
At the end of a day of talks, where most attendees will have already heard five other talks that day, I'm wondering about including something to draw together threads from the earlier sessions and provide a conclusion that also ties in with what I'm talking about. I have a few ideas...
I've seen someone do a wonderful job of this before, but it's not something I've ever heard mentioned in advice to (or books on) presenting... I guess if you're there, you'll see what I do.
The general "best-practice" guidance for code comments is that they should explain "Why the code is there, rather than what it does."
When code is generated by AI/LLMs (Copilot and the like) via a prompt (rather than line completions), it can be beneficial to include the command (prompt) that was given to the "AI". This is useful because generated code isn't always as thoroughly reviewed as code written by a person. There may be aspects of it that aren't fully understood. It's better to be honest about this.
What you don't want is to come back to some code in the future that doesn't fully work as expected, be unable to work out what it does, not understand why it was written that way originally, and then find that Copilot's explanation of the code can't adequately explain the original intent.
// Here's some code. I don't fully understand it, but it seems to work.
// It was generated from the prompt: "..."
// The purpose of this code is ...
No, you don't always need that first line.
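Applied to a (completely hypothetical) method, that might look something like this:

// Generated from the prompt: "Split a UK postcode into its outward and inward parts."
// The purpose of this code is to support address lookup.
// It passes all the current tests, but I haven't fully reviewed every edge case.
private static (string Outward, string Inward) SplitPostcode(string postcode)
{
    var trimmed = postcode.Trim().ToUpperInvariant();

    // In a valid full postcode, the inward part is always the last three characters.
    var inward = trimmed[^3..];
    var outward = trimmed[..^3].TrimEnd();

    return (outward, inward);
}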
Maybe xdoc comments should include different sections.
"Summary" can be a bit vague.
Maybe we should have (up to) 3 sections in the comments on a class or method:
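Something like this, perhaps. (A purely hypothetical sketch: <purpose> and <generatedFrom> aren't standard xdoc tags.)

/// <summary>What the code does.</summary>
/// <purpose>Why it exists: the requirement or decision it reflects.</purpose>
/// <generatedFrom>The prompt (if any) used to create it.</generatedFrom>
public void DoSomething()
{
    // ...
}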
Writing a comment like this may require some bravery the first few times you write such a thing, but it could be invaluable in the future.
These are the numbers I care about.
The one most other people care about is 2,375,403. That's the number of views the articles have had.
But this isn't a post about statistics. This is a post about motivation and reward.
I started writing this blog for me.
That other people have read it and got something from it is a bonus.
If I were writing for other people, I would write about different topics, I would care about SEO and promotion, and I would have given up writing sooner.
I get lots of views each day on posts, and I can't explain why.
I know that most views of this blog come from "the long tail," and Google points people here because there is a lot of content. The fact that I've been posting for 17+ years also gives me a level of SEO credibility.
There have been periods where I have written very little. This is fine by me. By not forcing myself to publish on a particular schedule, the frequency of posting doesn't hold me back or force me to publish something for the sake of it.
I publish when and if I want to.
Some people need and/or benefit from forcing themselves to publish on a regular schedule. If that works for you, great. If it doesn't, that's okay, too.
Others might think a multi-month gap in posting is bad, but if that's what I want or need, it's okay. Over a long enough period, the gaps are lost in the overall volume of posts.
I'm only interested in writing things that don't already exist anywhere else. This probably holds me back from getting more views than if that were my goal, but it probably helps me show up in the long tail of niche searches.
And yet, some people still regularly show up and read everything I write. Thank you. I'm glad you find it interesting.
Will I keep writing here? I can't say for certain, but I have no plans to stop.
I'm only publishing this post because I thought I might find it useful to reflect on all that I've written, and 1000 posts felt like a milestone worth noting, even if not fully celebrating. Originally, I thought I'd want to write lots about this, but upon starting it feels a bit too "meta" and self-reflective. I don't know what the benefit is of looking at the numbers. What I find beneficial is doing the thinking to get my ideas in order such that they make sense when written down. That's, primarily, why I write. :)
Code quality, and the use of conventions and standards to ensure readability, has long been considered important for the maintainability of code. But does that matter if "AI" is creating the code and can provide an easily understandable description of it when we really need to read and understand it?
If we get good enough at defining/describing what the code should do, let "AI" create that code, and then we verify that it does do what it's supposed to do, does it matter how the code does whatever it does, or what the code looks like?
Probably not.
My first thought as a counterpoint to this was about the performance of the code. But that's easy to address with "AI":
"CoPilot, do the following:
- Create a benchmark test for the current code.
- Make the code execute faster while still ensuring all the tests still pass successfully.
- Create a new benchmark test for the time the code now takes.
- Report how much time is saved by the new version of the code.
- Report how much money that time-saving saves or makes for the business.
- Send details of the financial benefit to my boss."
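For reference, doing that first step by hand might look something like this minimal BenchmarkDotNet sketch (the code being measured is made up):

using System;
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class SortBenchmarks
{
    private readonly int[] data = Enumerable.Range(0, 10_000).Reverse().ToArray();

    // The existing implementation, used as the baseline for comparison.
    [Benchmark(Baseline = true)]
    public int[] CurrentImplementation() => data.OrderBy(x => x).ToArray();

    // The (hopefully) faster version. All the existing tests must still pass.
    [Benchmark]
    public int[] FasterImplementation()
    {
        var copy = (int[])data.Clone();
        Array.Sort(copy);
        return copy;
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<SortBenchmarks>();
}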
Performance matters.
Sometimes.
In some cases.
It's really easy to get distracted by focusing on code performance.
It's easy to spend far too much time debating how to write code that executes a few milliseconds faster.
How do you determine/decide/measure whether it's worth discussing/debating/changing some code if the time spent thinking about, discussing, and then changing that code takes much more time than will be saved by the slightly more performant code?
Obviously, this depends on the code, where it runs, for how long, and how often it runs.
Is it worth a couple of hours of a developer's time considering (and possibly making) changes that may only save each user a couple of seconds over the entire time they spend using the software?
What are you optimizing for?
How do you ensure developers are spending time focusing on what matters?
The performance of small pieces of code can be easy to measure.
The real productivity of developers is much harder to measure.
How do you balance getting people to focus on the hard things (which may also be difficult to quantify) against the easy things to review and discuss (which they're often drawn to, and which can look important from the outside) but which don't actually "move the needle" in terms of shipping value or making the code easier to work with?
Visual Studio is having a UI refresh. In part, this is to make it more accessible.
I think this is a very good thing.
If you want to give feedback on another possible accessibility improvement, add your support, comments, and thoughts here.
Anyway, back to the current changes.
They include increasing the spacing between items in the menu.
There are some objections to this as it means that fewer items can be displayed at once.
Instead of complaining, I took this as an opportunity to revisit what I have displayed in the toolbar in my VS instances.
I used to have a lot there.
I knew that some of those things I didn't need and, in some cases, had never used. I just didn't want to go to the trouble of customising them.
"If they're there by default, it must be for a reason, right?" Or so I thought.
A better question is "Are they there for reasons I have?" In many cases, they weren't.
So I went through and spent what turned out to be only a few minutes customising the toolbars so they only contained (showed) the items I wanted, needed, and used.
That was several weeks ago, and it has been a massive improvement.
A system change to improve things for others encouraged me to improve things for myself. I see that as a win-win.