Framework Design Guidelines


Microsoft has released its Framework Design Guidelines, guidelines for designing libraries that extend and interact with the .NET Framework. The goal is to help library designers ensure API consistency and ease of use by providing a unified programming model that is independent of the programming language used for development. They recommend you follow these design guidelines when developing classes and components that extend the .NET Framework, because inconsistent library design adversely affects developer productivity and discourages adoption.

Continue reading “Framework Design Guidelines”

Running UAT and Integration Tests During a VSTS Build

There are lots of small suggestions I’ve learned from experience when it is time to create a suite of integration / UAT tests for your project. A UAT or integration test is a test that exercises the entire application, sometimes composed of several services that are collaborating to create the final result. The difference between UAT tests and integration tests, in my personal terminology, is that UAT tests use direct automation of the User Interface, while integration tests can skip the UI and exercise the system directly from public APIs (REST, MSMQ Commands, etc). Continue reading “Running UAT and Integration Tests During a VSTS Build”

How No-Code, Low-Code Tools Will Strengthen- and Disrupt- Enterprise App Development

No-code, low-code tools for helping people without a technical background write mobile apps are proliferating in enterprises, leading to the rise of so-called “citizen developers” – business analysts and domain experts who develop mobile apps without the help of IT. The tools have helped businesses roll out mobile apps quickly, cut costs, and ensure that the apps are useful to the greatest number of users. Continue reading “How No-Code, Low-Code Tools Will Strengthen- and Disrupt- Enterprise App Development”

Github’s Top Coding Languages Show Open Source Has Won

On Wednesday, Github published a graph tracking the popularity of various programming languages on its eponymous internet service, a tool that lets anyone store, edit, and collaborate on software code. In recent years, GitHub has become the primary means of housing open source software—code that’s freely available to the world at large; an increasing number of businesses are using the service for private code, as well. A look at how the languages that predominate on Github have changed over time is a look at how the software game is evolving.

In particular, the graph reveals just how much open source has grown in recent years. It shows that even technologies that grew up in the years before the recent open source boom are thriving in this new world order—that open source has spread well beyond the tools and the companies typically associated with the movement. Providing a quicker, cheaper, and more comprehensive way of building software, open source is now mainstream. And the mainstream is now open source.

“The previous generation of developers grew up in a world where there was a battle between closed source and open source,” says Github’s Ben Balter, who helped compile the graphic. “Today, that’s no longer true.”
Java Everywhere

Case in point: the Java programming language. A decade ago, Java was a language primarily used behind closed doors, something that big banks and other “enterprise” companies used to build all sorts of very geeky, very private stuff. But as GitHub’s data shows, it’s now at the forefront of languages used to build open source software.

Among new projects started on GitHub, Java is now the second-most popular programming language, up from seventh place in 2008; according to Balter, the increase is driven not by private code repositories but by public (open source) repos. Among private Github repos, he says, Java ranks seventh.
Why the shift? Java is well suited to building massive internet services along the lines of Google, Twitter, LinkedIn, Tumblr, and Square, and the economics of the software business dictate that such services run on open source. As Balter points out, Java’s rise is also a result of Google making it the primary language for building apps on Android phones and tablets.
The graph also shows a recent uptick for C#. C# is basically Microsoft’s version of Java; in years past, it was even more of a closed-source kind of thing. After all, it was overseen by Microsoft, a company that traditionally kept open source at bay. But as the influence of open source has grown, Microsoft has embraced the movement. It has even open sourced many of the tools used to build and run applications in C#.

Another language on the rise among Githubbers? Swift, Apple’s language for building apps on the iPhone, iPad, and the Mac (the language doesn’t show up in the graph, but in the raw data GitHub sent to WIRED, it now ranks at number 18 on the list). The reasons for this are different. Swift is on the rise because it’s brand new and it’s designed for the world’s most popular smartphone. But its presence is another nod to the growing importance of open source.

Unlike with its previous operating system, you see, Apple has said it will open source Swift, letting anyone modify it so that it will run on more than just the iPhone and the iPad. When Apple opens up, you’ll know the world has changed indeed.

Rainbird and Microsoft Cognitive Services

During a three-day hack-a-thon in April 2017, we experimented with integrating a range of Microsoft Cognitive Services with the Rainbird Platform. Using a demo Knowledge Map built in Rainbird referred to as the Hamilton demo, we used Microsoft Cognitive Services to produce bots in Skype and Slack. The objective of the Hamilton demo is to enable Bank Account recommendation through rules generated by banking experts. More details on the Rainbird platform and the Hamilton demo are available through the links under the Resources section further down.
In addition, we deployed a Rainbird environment on the Azure cloud-hosted platform. A Rainbird environment consists of database instances, an inferencing engine, and the expert modeling tool. Each required element is available as a docker image and therefore simple to deploy.
Tools Used During Hack-a-Thon
• Microsoft Bot Framework.
• Microsoft Cognitive Services:
◦ QnA Maker.
• Rainbird Platform.
Rainbird’s power is best demonstrated by taking a complex decision-making routine generally performed by teams of people and automating that decision-making process in a tool for others to use. Very often, a customer’s requirement encompasses this scenario along with a need to answer basic questions. Microsoft’s QnA Maker extracts frequently asked questions and answers, making them accessible through its API. Additionally, customers frequently ask about Rainbird bot integration; therefore, we have used Microsoft’s Bot Framework to provide a front end in both Slack and Skype. Finally, we integrated LUIS, Microsoft’s Natural Language Processing (NLP) tool, to identify the user’s free-text input intent from which the application routes requests through QnA Maker or Rainbird.
• QnA Maker: solves simple question-and-answer scenarios.
• Rainbird Platform: complex decision making through extended consultation (multiple questions and answers).
• Bot Framework: distributes the implementation to various bot platforms.
• LUIS: identifies user intent from free-text entry.
See how we did it.
In summary, the user’s initial request is passed to LUIS to identify whether the request for information should be directed to QnA Maker or Rainbird. When QnA Maker is identified as the route, the API is contacted and the answer communicated back to the user. When Rainbird is identified as the route, an iterative Q&A consultation is conducted to solve the user’s original query. The consultation is directed to aid the interaction using the attributes of the medium (Skype, Slack, etc.) as illustrated below:

The solution’s code is publicly accessible here.
Testing the Solution
See how we tested the solution with QnA Maker and Rainbird.
QnA Maker
In QnA Maker, we defined a handful of questions and associated answers as shown below:
Do you have business advisors?
We have a team of qualified business advisors, ready to work with you on new or existing business ventures.
Do you have any cash machines?
We have three 24-hour cash machines, ready to dispense money at a moment’s notice.
How many counters do you have?
We have 5 general customer service counters that are all open during opening hours.
In Rainbird, we defined a single complex decision task to recommend a suitable bank account. You can ask “Please recommend me a bank account” or a variant of this request, as LUIS will determine the intent. Our solution will then interact with Rainbird, asking a series of questions before making a recommendation. The final recommendation is supplemented with a graphical representation of the path taken, the Rainbird Evidence Tree.

Try It Out
The Microsoft Bot Framework by default generates Web Chat and Skype implementations.
Click here to try out the final solution in the web chat.
Step-by-Step Guide
Here’s a quick explanation to help you reproduce the solution. To use the Microsoft services, you will need to sign up for each of them:
The solution requires a number of environment variables associated with the above services and Rainbird. When deploying your bot in the Azure Infrastructure, use the Azure portal to enter the required environment variables.
Variable Name
• Created during Bot registration
• Created during Bot registration
• Rainbird host used in Evidence Tree links
• Rainbird API URL
• Rainbird API key
• Rainbird knowledge map ID
• Subject value of the Rainbird query
• Relationship used in the Rainbird query
• LUIS account ID
• LUIS application key
• QnA Maker account ID
• QnA Maker application key
The application’s code is primarily in app.js. We’ve additionally developed metaintent.js, a wrapper that handles the user’s input so we can determine the best tool to answer their question.
Firstly, in app.js, we create a server using restify.js as a means of easily capturing the REST API requests in our application and configure this to listen on the desired port:
server.listen(process.env.port || process.env.PORT || 3978, function () {
    console.log('%s listening to %s', server.name, server.url);
});
Our setup continues with the Microsoft Bot Framework dependency, botbuilder, from which we can retrieve a universal bot instance used to navigate our conversation through dialogs.
Our attention then turns to configuring our connection to Rainbird. Below, we have informed the Rainbird API (URL) of our API key and the ID of the Knowledge Map, thus identifying ourselves and the knowledge map we wish to query.
Our Rainbird knowledge map is able to run a single query, which is ‘recommend a bank account’. Therefore, we have preconfigured the query JSON to this effect using environment variables.
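The preconfigured query JSON can be sketched roughly as follows. This is a hypothetical illustration: the environment variable names (RB_QUERY_SUBJECT, RB_QUERY_RELATIONSHIP) and the field names are our own placeholders, since the original solution’s names are not listed in the post.

```javascript
// Hypothetical sketch: build the single preconfigured Rainbird query
// from environment variables. Names here are illustrative placeholders.
function buildRainbirdQuery(env) {
    return {
        subject: env.RB_QUERY_SUBJECT,           // e.g. the customer
        relationship: env.RB_QUERY_RELATIONSHIP, // e.g. 'recommendedAccount'
        object: null                             // left open for Rainbird to infer
    };
}
```

Because the knowledge map supports only the one query, building it once from configuration keeps app.js free of hard-coded query details.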
Our application’s initial setup is completed through the use of Microsoft’s universal bot connector, which instructs our application to listen for messages.
Click here to learn more about the default message handler.
Besides a number of helper functions designed to handle the Rainbird interactions, there are seven bot.dialog() handlers. The default message handler uses dialog to control the conversation flow. See here for more details.
The root dialog handler below decides on one of three courses of action based on the result of calling metaIntent.process(). As mentioned, metaintent.js determines the path to take. Here we pass the user’s input through to our helper in metaintent.js, where the content is used to identify whether we are asking a simple question to be handled by Microsoft’s QnA Maker or making a Rainbird request to recommend a bank account based on a series of questions and answers. As a failsafe, if neither action matches, we pass back a ‘Sorry didn’t understand…’ message.
Before we run the code to determine the path to take, we execute session.sendTyping(), which tells the bot to show the user a typing indicator.
To help capture variations of text indicating the need to run Rainbird’s ‘Recommend me a bank account’ request, we filter the user’s input through a trained LUIS implementation. To understand this, let’s turn our attention to the metaintent.js content. Beginning with the processTextLuis() function, we can see that LUIS is informed of the user’s input. If the LUIS response scores highly enough, we return the first element in the array received from LUIS (remember we only have one intent).
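The score check in processTextLuis() can be modeled on its own. This is a hedged sketch: a real implementation would first POST the user’s text to the LUIS endpoint, and the 0.7 threshold is our assumption, as the post only says the score must be “high enough”.

```javascript
// Hypothetical sketch of the LUIS score check: given a LUIS-style
// response object, return the top intent only if it clears a threshold.
var LUIS_THRESHOLD = 0.7; // assumed value; not stated in the post

function pickLuisIntent(luisResponse) {
    var intents = luisResponse.intents || [];
    if (intents.length > 0 && intents[0].score > LUIS_THRESHOLD) {
        return intents[0]; // first element: we trained only one intent
    }
    return null; // no confident match; caller falls back to QnA Maker
}
```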
The function exposed to our main app.js code attempts the LUIS intent match using the above function before evaluating the correct path to take. Assuming no error is returned from processTextLuis(), a result object informs our calling code to take the Rainbird route; otherwise, we attempt the QnA Maker route.
Our processTextQnA() function contacts Microsoft’s QnA Maker with a formed URL, a JSON body containing the user’s text, and our QnA Maker credentials (defined in environment variables). Upon the success of a QnA Maker match, the associated answer forms our result object and is passed back to our processText() function and subsequently our calling function in app.js. Again, note that we check for a high certainty before confirming a match (response.body.score > 80).
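Putting the two checks together, the routing decision can be sketched as a small pure function. The function and field names here are ours, not the original source’s; only the QnA Maker threshold (score > 80) comes from the post.

```javascript
// Hypothetical sketch of the routing decision in metaintent.js:
// prefer a confident LUIS intent match (Rainbird route), fall back to
// QnA Maker, and otherwise signal that neither matched.
function chooseRoute(luisIntent, qnaResult) {
    if (luisIntent) {
        return { route: 'rainbird' };
    }
    if (qnaResult && qnaResult.score > 80) { // QnA Maker scores 0-100
        return { route: 'qna', answer: qnaResult.answer };
    }
    return { route: 'none' }; // caller replies "Sorry didn't understand..."
}
```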
Returning to our calling code in app.js, we can see where we begin the more complex task of processing the Rainbird query by redirecting to our /prestart dialog.
The /prestart dialog checks for the existence of a running Rainbird conversation and redirects to our main rainbird loop dialog when one exists (/rbloop). Otherwise, we start a conversation by redirecting to the /start dialog.
The /start dialog as the name suggests starts our Rainbird conversation. This is achieved through two steps. The first calls the Rainbird API start endpoint where we use our Rainbird credentials (API key and Knowledge Map ID) to authenticate. Next, we call the query endpoint to instruct Rainbird to start the ‘recommend me a bank account’ predefined query.
Note that we’ve used a publicly available dependency to help abstract away much of the Rainbird Rest API syntax. See GitHub for details.
Once complete, our /start dialog redirects to the /rbloop dialog. Essentially, all Rainbird paths lead to the /rbloop dialog where we iterate through questions until Rainbird has sufficient knowledge to return result(s) relating to the original query — that is, unless Rainbird exhausts all possibilities and concludes it is unable to answer the user’s query.
Finally, let’s examine the /rbloop dialog that controls much of the Rainbird discussion flow.
Before we look into the two functions in our array, let’s take a look at the last function, cancelAction(). Once a user has entered into a Rainbird conversation their input will always get redirected to the /rbloop dialog. With this function in place, if the user wishes to exit this line of conversation, they can do so by typing ‘restart’. The function removes the conversation data from the session and returns the message ‘No problem, how else can I help you?’. Now the user is free to enter into a new line of conversation.
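The restart behavior can be sketched independently of the Bot Framework. This is a hypothetical illustration: the conversationData key name is an assumption, and the real cancelAction() is wired into the dialog options rather than called directly.

```javascript
// Hypothetical sketch of the 'restart' escape hatch: clear the stored
// Rainbird conversation so the user can begin a new line of conversation.
function handleRestart(session) {
    delete session.conversationData.rainbird; // key name is an assumption
    return 'No problem, how else can I help you?';
}
```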
Now let’s look at the functions in the array. In our first function, we check the conversation data to see if Rainbird has a question for the user. If so, then we call sendRBQuestion().
Our sendRBQuestion() function examines the question content received from Rainbird to help form a suitably illustrated question, i.e. a question with multiple choice options. We’ve used builder.Prompts() to present the question to our bot user.
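The shaping step in sendRBQuestion() can be sketched as follows. The question object’s field names (prompt, concepts) are assumptions rather than Rainbird’s documented schema; in the real code, a ‘choice’ result would be rendered with builder.Prompts.choice() and a ‘text’ result with builder.Prompts.text().

```javascript
// Hypothetical sketch: shape a Rainbird question into either a
// multiple-choice prompt or a free-text prompt for the bot user.
function formatRBQuestion(question) {
    var concepts = question.concepts || [];
    if (concepts.length > 0) {
        return {
            type: 'choice',
            text: question.prompt,
            choices: concepts.map(function (c) { return c.name; })
        };
    }
    return { type: 'text', text: question.prompt };
}
```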
Our second function handles the bot user’s response to a question. Taking this response, we contact Rainbird again, this time via the yolandaResponse() function, which calls the response Rainbird API endpoint. Similarly to when we called the query endpoint, we examine the result to determine whether we need to ask another question (by redirecting to the /rbloop dialog) or whether we have a result for our original request.
When the Rainbird response is not a question, we process the response as an answer to our query. First, we call sendRBResult(), which constructs the message informing the user of the query results. We then clear out our conversation data so that future Rainbird queries run without knowledge of the answers given in this conversation, and finally we complete our cleanup by instructing our bot to endDialog().
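The loop’s question-or-answer decision can be sketched as a small helper. Field names here are assumptions; the intent is only to show the shape of the iteration: another question means another pass through the /rbloop dialog, anything else is treated as the final result.

```javascript
// Hypothetical sketch of the /rbloop decision after a Rainbird response.
function nextStep(rbResponse) {
    if (rbResponse.question) {
        return { action: 'ask', question: rbResponse.question }; // loop again
    }
    return { action: 'answer', result: rbResponse.result || [] }; // done
}
```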
In the above sendRBResults(), we construct a link to Rainbird’s Evidence Tree, a visual representation of the path Rainbird took to derive the associated answer. This display is optional, but in some industries it is essential for compliance.
To complete our analysis of the solution, let’s look at the code used by Skype and Web Chat to present a welcome message. Note this feature is not available in Slack.
As you can see, it’s possible to handle bot specific events. Here, we have demonstrated handling both Web Chat and Skype specific events to present the user with a ‘welcome’ message when they first enter the bot.
Our final step of the hack was to configure and deploy our bot implementation in Azure. We followed this blog to guide us through the deploy process.
This demo, produced in the three-day hack, represents a starting point that other implementations can easily build on in line with specific requirements. You could go on to add additional Rainbird queries identified through further training in LUIS, expand the range of QnA Maker questions, and even configure multiple Rainbird knowledge maps: a solution that is scalable yet flexible.
Technical limitations such as the handling of the restart command could perhaps be improved by supporting multiple conversations simultaneously.
The Bot Framework, through Azure, makes it easy to roll out to other bots (referred to as channels in Azure). By default, Skype and Web Chat are selected, but adding others is as easy as signing up, selecting the channel, and configuring it with your credentials. This is all achieved in the Bot Framework portal. The array of available channels is extensive: Bing, Cortana, Facebook Messenger, Kik, and Slack, as well as several others.
Our only stumbling block came from the bot’s inability to handle plural answers when presenting a multiple-choice question. When the bot presented options, the functionality followed radio-button behavior, whereas our use case would have favored checkbox behavior. Having spoken with Microsoft, they assure us this is being worked on, and we look forward to seeing this feature in a future release.
• Additionally, we found this resource useful when bringing the tools together.

Should Software Developers Be Generalists or Specialists?

I don’t even need an outline to write this chapter.
Of all the topics I talk about, this is perhaps one of the most exhausted; continually asked about, questioned, and evaluated.
I’ve talked about this topic so much that I dedicated an entire YouTube playlist to all the videos I’ve done on the topic, and that list is continually growing.
What am I talking about?
The age-old debate of whether or not you should become a specialist or a generalist.
Should you become a “jack-of-all-trades” and a “full-stack developer,” or should you specialize in one or two areas of software development and “go deep?”
Well, it turns out this is sort of a false dichotomy.
The real answer is both.
Let’s find out why.
The Power Of Specialization

Before we get into the debate, I want to start off by showing you just how important and beneficial specialization is.
Let’s suppose that you were on trial for a murder. Yes, a murder. You didn’t do it—I know you didn’t—but you still need to prove your innocence. What do you do? Do you hire a lawyer who is good at tax law, divorce law, real estate law and criminal law? Or, do you hire a lawyer who specializes in criminal law, specifically defending people who are convicted of murder?
I don’t know about you, but if the rest of my life is on the line, I’m going to choose the specialist every time.
Many people say they want or value a generalist, and they think they do, but when it comes down to it, they pick a specialist every time.
I’ll give you another example. I wanted to get some crown molding done throughout my house. Anyway, I was looking for carpenters or contractors to do the crown molding when I came across this one company who specialized in crown molding. In fact, the name of their company was Kings of Crown. All they did was install crown molding. That is all they did. Who do you think I chose?
Did I want to take a chance on a carpenter or contractor who did some crown molding, or did I want to call the crown molding “experts” and get the sure thing?
That’s not to say there isn’t any value in having a broad base of knowledge or being a generalist to some degree (there are times when I’m looking for a general handyman), but it is extremely valuable to be a specialist of some kind—or at least to market yourself that way.
Think about it this way. Do you think that fictional murder trial lawyer knows about other areas of law other than murder trials? Of course, he does. He might actually be pretty good in multiple areas of law and have knowledge in several fields. But he advertises himself as a murder lawyer because he understands the power of specialization.
The same for those crown molding guys. Don’t you think they could probably handle other carpentry jobs? Of course, they could, but they choose to specialize because it’s much more profitable to do so.
In Order To Specialize, You Have To Have A Broad Base

One thing that many software developers don’t understand is that
just about all specialists are also generalists, but no generalists are specialists.
What do I mean by this?
I mean that, usually, in order to acquire the skills of a specialist, a great deal of general knowledge is required and accumulated along the way.
It’s very difficult to be a good specialist without also building a broad base of general knowledge about your field.
My brother-in-law is studying to become an oral surgeon. In order for him to do that, he had to first go through dental school and become a dentist. Now, he’s not going to be doing general dentistry very often, but to him, filling a cavity or doing some general dentistry work is cake. He’s probably better than most generalist dentists, simply because he had to learn all that and more in order to become an oral surgeon.
That doesn’t mean that every specialist is a good generalist, or that they keep their skills up to date, but in general (haha) you’ll find most specialists generally do. (How do you like that sentence?)
This is all to say that specializing does not preclude you from being a generalist also; it just gives you more options and makes you more valuable.
It’s All About the T-shaped Knowledge
What you really want to strive for is what is known as T-shaped knowledge.
It means that you have a broad base of knowledge in your field, and then you have at least one area of deep, specialized knowledge or skill.
As a software developer, you should strive to be well-versed in best practices, algorithms, data structures, different architectures, front-end, back-end, databases, etc.
But, you should also pick at least one area where you are going to go deep.
You need to pick some specialization that will set you apart from the masses and greatly increase your value. When you build up your personal brand and market yourself, you are going to use this specialization to do it. If you want to make waves, you need a small enough pond. In the HUGE pond of software development, being a generalist will make it more difficult to even make ripples, at least at first.
So, yes, work on being a well-rounded software developer. Develop a broad base of knowledge and grow that base, little by little, year after year. But also pick some specialization that you will dive into and become a master of. Eventually, you can even have “comb-shaped knowledge,” where you have multiple deep specialties, like Elon Musk.
But start with one.
But Everyone Says They Are Looking for Generalists
I know, I know, every job description says that they are looking for good software developers who can wear many hats or work with the “full stack” or can be a jack-of-all-trades. They want you to possess every skill under the sun. It’s all a lie, I tell you. A big fat lie.
I guarantee you that if you have the exact skills that are required for a job and if you are an expert in the framework or technology that company is using, they are going to be much more likely to hire you than a generalist.
What companies are really saying when they say they want someone who is a generalist is that they want someone who is adaptable and can learn quickly.
The fear is that they’ll hire someone who can only do one thing, so they try to safeguard against that by making the job description state that experience with their framework or technology stack is not necessary, even though that’s not completely true.
Don’t get me wrong, it’s not an intentional lie. I do believe hiring managers honestly think they want generalists, but like I said, what they really want is someone who is versatile and flexible. You can still be that and be a specialist. And like I said before, your best bet is to sort of be both.
Get that T-shaped knowledge so that you do have a broad base, but go deep in one area so that you can be the expert in the exact technology or skill set that matches the job you apply for.
You Can’t Even Be a Generalist Today
It’s not really possible. The field of software development and technology is so large and changing so rapidly, that you can’t know it all. Yes, you can have a broad base of knowledge. Yes, you can understand fundamental principles. But, no, you just can’t understand enough about everything that exists out there to really call yourself a generalist anymore.
Even if you are a “full stack” developer, you are going to have to pick a stack or two. You can’t know them all and be effective by any real measure. It’s not just computer science and programming where this phenomenon is occurring either. Every major profession is moving towards more and more value on specialization.
Consider how large medicine is today. Generalist doctors have trouble diagnosing underlying illnesses and problems because there are just too many possibilities.
Accountants, lawyers, financial analysts, and just about every kind of engineer have to specialize to be effective because knowledge domains are growing to such large extents.
But What if I Specialize in The Wrong Thing?

Then specialize in something else. It’s not that big of a deal. One of my good friends, John Papa, specialized in a Microsoft technology called Silverlight. And then Silverlight was axed by Microsoft and it’s now as dead as a doornail. But did John throw his hands up in the air, give up, and decide to live in his car? No. Because he was already a specialist, he had built up a reputation and a following.
He just shifted and pivoted to another specialty that was closely related.
Now John is a specialist in developing SPAs (Single-Page Applications), and he’s doing even better than he was before.
Far too many software developers I talk to are so afraid of picking the wrong thing to specialize in that they don’t specialize in anything. They remain stagnant in their careers for years, paralyzed by fear, always considering the “what ifs.” Don’t do that; just pick something and go with it.
It’s a much better choice than doing nothing, and you can always change course and switch directions later on if you need to. Plus, you’ll find that once you learn how to go deep into one specialization, the next one is much easier. Many skills that don’t seem transferable are, and developing the ability to “go deep” is valuable in itself.
So, What Should You Do?
Regardless of where you are in your career, pick some kind of specialization to pursue.
Don’t worry if it’s not the “right” or the final one.
Start with one, build your personal brand around it, and decide to go deep.
Err on the side of picking something too small and specific rather than too broad. Don’t be a C# developer: be a C# developer specializing in a specific C# framework, technology, or even technology stack. Try to go as small and detailed as you can. You can always branch out and expand later.
My friend Adrian Rosebrock is a very successful software developer and entrepreneur who specializes in a specific Python library for computer vision. You wouldn’t believe how successful he has been with this particular niche, even though it’s extremely small and focused.
At the same time, work on building up your general knowledge of software development—your broad base. Learn how to write good code. Learn about the underlying principles and technologies that may manifest themselves in many ways, but really are never changing at the core.
You either want to learn things that are deeply focused and directly in your specialty or broad enough to be widely applicable and somewhat timeless. Don’t try and learn a bunch of different programming languages and frameworks that you will likely never use.
Following this approach, you’ll set yourself apart and set yourself up for success.

GhostDoc Pro Beta brings true Visual Editing to XML Comments

The latest beta of GhostDoc introduces a new feature that makes XML documentation authoring a breeze: visual editing for XML comments.
The number one challenge in authoring XML comments, according to our users, has been keeping the XML valid while including encoded HTML formatting tags, code samples, etc. Many of these issues don’t even show up until the help docs are generated.

And now we offer a solution to this.
Visual editing makes it painless to insert tables, lists, pictures, links, source code samples, and other formatting directly in your XML doc comments. (Most of these are included with the beta, while a few are coming in the final version.)
This is a huge leap from plain XML comment maintenance. Now you have WYSIWYG editing and don’t have to worry about valid and compliant XML in your comments.
There is no longer any need to look up the correct syntax of the XML comment section tags and formatting to use.
The visual comment editor allows you to create and edit comments directly within an editable preview of the generated documentation. Comments created with the visual editor are written back to your source code in standard XML format, properly encoded, when required.
You still can have GhostDoc auto-generate your XML doc template in the WYSIWYG editor. Then you have a Word-like experience to continue editing your documentation to your satisfaction.
The screen divides your help documentation into editable areas. The editable areas represent the sections of your help docs — summary, return value, etc. While a new comment includes the base set of sections, you can use the “Add…” button on the toolbar for additions, or you can delete unwanted sections.
Ready to try the new beta features?

For the last several years, I’ve made more and more of my living via entrepreneurial pursuits. I started my career as a software developer and then worked my way along that career path before leaving full-time employment to do my own thing. These days, I consult, but I also make training content, write books, and offer productized services.
When you start to sell things yourself, you come to appreciate the value of marketing. As a techie, this feels a little weird to say, but here we are. When you have something of value to offer, marketing helps you make interested parties aware of your offer. I think you’d like this and find it worth your money, if you gave it a shot.
In pursuit of marketing, you can use all manner of techniques. But today, I’ll focus on a subtle one that involves generating a good reputation with those who do buy your products. I want to talk about making good documentation.
The Marketing Importance of Documentation
This probably seems an odd choice for a marketing discussion. After all, most of us think of marketing as what we do before a purchase to convince customers to make that purchase. But repeat business from customer loyalty counts for a lot. Your loyal customers provide recurring revenue and, if they love their experience, they may evangelize for your brand.
Providing really great documentation makes an incredible difference for your product. I say this because it can mean the difference between frustration and quick, easy wins for your user base. And, from a marketing perspective, which do you think makes them more likely to evangelize? Put yourself in their shoes. Would you recommend something hard to figure out?
For a product with software developers as end users, software documentation can really go a long way. And with something like GhostDoc’s “build help documentation” feature, you can notch this victory quite easily. But the fact that you can generate that documentation isn’t what I want to talk about today, specifically.
Instead, I want to talk about going the extra mile by customizing it.
Introducing “Conceptual Content”
You can easily generate documentation for your API with the click of a button. But you can also do a lot more.
GhostDoc Enterprise features something called “Conceptual Content.” Basically, it allows you to customize and add on to what the engine generates using your code and XML doc comments. This comes in handy in ways limited only by your imagination, but here are some ideas.
• A welcome page.
• A support page.
• A “what’s new” page.
• Including a EULA/license.
• Custom branding.
You probably get the idea. If you already look to provide documentation for your users, you no doubt have some good additional thoughts for what they might value. Today, I’m going to show you the simplest way to get going with conceptual content so that you can execute on these ideas of yours.
How It Works at a High Level
For GhostDoc to work its documentation-generating magic, it creates a file in your solution directory named after your solution. For instance, with my ChessTDD solution, it generates a file called “ChessTDD.sln.GhostDoc.xml.” If you crack open this file, you will see settings mirroring the ones you select in Visual Studio when using GhostDoc’s “Build Help Documentation.”
To get this going, we face the task of basically telling this file about custom content that we will create. First, close out of Visual Studio, and let’s get to work hacking at this thing a bit. We’re going to add a simple, text-based welcome page to the standard help documentation. To do this, we need the following things.
• Modifications to the GhostDoc XML file.
• The addition of a “content” file describing our custom content.
• The addition of a “content” folder containing the AML files that make up the actual, custom pages.
Let’s get started.
The Nuts and Bolts
First, open up the main GhostDoc XML file and look for the “ConceptualContent” section. It looks like this.

In essence, this says, “no conceptual content here.” We need to change that. So, replace the empty ContentLayout entry with this (substituting the name of your solution for “ChessTDD” if you want to follow along with your own instead of my ChessTDD code.)
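As a sketch (the element names and layout here are my assumption, based on the Sandcastle Help File Builder conventions that GhostDoc’s help generation builds on), the replacement entry points ContentLayout at a .content file in your solution directory:

```xml
<ConceptualContent>
  <ContentLayout>ChessTDD.content</ContentLayout>
</ConceptualContent>
```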

Next up, you need to create the file you just told it about, ChessTDD.content. This file goes in the same directory as your solution and looks like this.
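A sketch of the .content file, following the Sandcastle content layout format (the GUID below is a placeholder for illustration; generate your own and keep it handy for the next step):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Topics>
  <!-- Placeholder GUID: substitute the one you generate. -->
  <Topic id="d37c5275-6a05-41b2-b0d6-9b7a9c8c4a21" visible="True" title="Welcome" />
</Topics>
```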

For the ID, I simply generated a GUID using this site. This ID simply needs to be unique, and to match the next file that we’ll create. Next up, create the folder you told ContentLayout about called, “Content.” Then add the file Welcome.aml to that folder, with the following text.

Welcome to our help section!

Notice that we use the same GUID here as in the content file. We do this in order to link the two.
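For reference, a minimal Welcome.aml skeleton in Sandcastle’s MAML format (the namespaces are the standard MAML ones; the GUID is a placeholder and must match the one in your .content file):

```xml
<?xml version="1.0" encoding="utf-8"?>
<topic id="d37c5275-6a05-41b2-b0d6-9b7a9c8c4a21" revisionNumber="1">
  <developerConceptualDocument
      xmlns="http://ddue.schemas.microsoft.com/authoring/2003/5"
      xmlns:xlink="http://www.w3.org/1999/xlink">
    <introduction>
      <!-- The body of the custom welcome page. -->
      <para>Welcome to our help section!</para>
    </introduction>
  </developerConceptualDocument>
</topic>
```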
Let’s Give it a Whirl
With your marked-up GhostDoc XML file, the new content file, and the new Content folder and welcome AML file, you can now re-launch Visual Studio. Open the solution and navigate through GhostDoc to generate the help documentation CHM file.

There you have it. Now you can quickly add a page to the automatically generated help documentation.
Keep in mind that I did the absolute, dead simplest possible thing I could do for demonstration purposes. You can do much more. For example:
• Adding images/media to the pages.
• Have cross-links in there for reference.
• Add snippets and examples.
• Build lists and tables.
As I said earlier, you’ll no doubt think of all manner of things to please your user base with this documentation. I suggest getting in there, making it your own, and leaving a nice, personal touch on things for them. When it comes to providing a good user experience, a little can go a long way.
Learn how GhostDoc can help you simplify your XML comments and produce and maintain quality help documentation.

Today, I’d like to tackle a subject that inspires ambivalence in me. Specifically, I mean the subject of automated text generation (including a common, specific flavor: code generation).
If you haven’t encountered this before, consider a common example. When you file->new->(console) project, Visual Studio generates a Program.cs file. This file contains standard includes, a program class, and a public static void method called “Main.” Conceptually, you just triggered text (and code) generation.
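For reference, the scaffold Visual Studio spits out looks roughly like this (recalled from the classic console project template, not taken from the post; the namespace tracks your project name):

```csharp
// Standard includes generated by the console template.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleApplication1
{
    // The program class with its public static entry point.
    class Program
    {
        static void Main(string[] args)
        {
        }
    }
}
```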
Many schemes exist for doing this. Really, you just need a templating scheme and some kind of processing engine to make it happen. Think of ASP.NET MVC, for instance. You write markup sprinkled with interpreted variables (i.e. Razor), and your controller object processes that and spits out pure HTML to return as the response. PHP and other server-side scripting constructs operate this way, and so do code/text generators.
However, I’d like to narrow the focus to a specific case: T4 templates. You can use this powerful construct to generate all manner of text. But use discretion, because you can also use this powerful construct to make a huge mess. I wrote a post about the potential perils some years back, but suffice it to say that you should take care not to automate and speed up copy and paste programming. Make sure your case for use makes sense.
The Very Basics
With the obligatory disclaimer out of the way, let’s get down to brass tacks. I’ll offer a lightning fast getting started primer.
Open some kind of playpen project in Visual Studio, and add a new item. You can find the item in question under the “General” heading as “Text Template.”

Give it a name. For instance, I called mine “sample” while writing this post. Once you do that, you will see it show up in the root directory of your project as Sample.tt. Here is the text that it contains.
<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ output extension=".txt" #>
Save this file. When you do so, Visual Studio will prompt you with a message about potentially harming your computer, so something must be happening behind the scenes, right? Indeed, something has happened. You have generated the output of the T4 generation process. And you can see it by expanding the caret next to your file as shown here.
If you open the Sample.txt file, however, you will find it empty. That’s because we haven’t done anything interesting yet. Add a new line with the text “hello world” to the bottom of the file and then save. (And feel free to get rid of that message about harming your computer by opting out, if you want). You will now see a new Sample.txt file containing the words “hello world.”
Beyond the Trivial
While you might find it satisfying to get going, what we’ve done so far could be accomplished with file copy. Let’s take advantage of T4 templating in earnest. First up, observe what happens when you change the output extension. Make it something like .blah and observe that saving results in Sample.blah. As you can see, there’s more going on than simple text duplication. But let’s do something more interesting.
Update your file to contain the following text and then click save.
<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ output extension=".txt" #>
<# for(int i = 0; i < 10; i++) WriteLine($"Hello World {i}"); #>
When you open Sample.txt, you will see the following.
Hello World 0
Hello World 1
Hello World 2
Hello World 3
Hello World 4
Hello World 5
Hello World 6
Hello World 7
Hello World 8
Hello World 9
Pretty neat, huh? You’ve used the <# #> tokens to surround first-class C# that you can use to generate text. I imagine you can see the potential here.
Oh, and what happens when you type malformed C#? Remove the semicolon and see for yourself. Yes, Visual Studio offers you feedback about bad T4 template files.
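Alongside statement blocks, T4 also supports expression blocks, delimited with <#= #>, which emit the value of a C# expression directly into the output. A small sketch (the class names are invented for illustration):

```
<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ output extension=".cs" #>
<# foreach (var name in new[] { "Customer", "Order" }) { #>
public partial class <#= name #>Repository { }
<# } #>
```

Saving this produces a .cs file containing one empty partial class per name, which hints at how the database-driven generation discussed below works.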
Use Cases
I’ll stop here with the T4 tutorial. After all, I aimed only to provide an introduction. And I think that part of any true introduction involves explaining where and how the subject might prove useful to readers. So where do people reasonably use these things?
Perhaps the most common usage scenario pertains to ORMs and the so-called impedance mismatch problem. People create code generation schemes that examine databases and spit out source code that matches them. This approach spares the significant performance hit of some kind of runtime scheme for figuring this out, but without forcing tedious typing on dev teams. Entity Framework makes use of T4 templates.
I have seen other uses as well, however. Perhaps your organization puts involved XML configuration files into any new projects and you want to generate these without copy and paste. Or, perhaps you need to replace an expensive reflection/runtime scheme for performance reasons. Maybe you have a good bit of layering boilerplate and object mapping to do. Really, the sky is the limit here, but always bear in mind the caveat that I offered at the beginning of this post. Take care not to let code/text generation be a crutch for cranking out anti-patterns more rapidly.
The GhostDoc Use Case
I will close by offering a tie-in with the GhostDoc offering as the final use case. If you use GhostDoc to generate comments for methods and types in your codebase, you should know that you can customize the default generations using T4 templates. (As an aside, I consider this a perfect use case for templating — a software vendor offering a product to developers that assists them with writing code.)
If you open GhostDoc’s options pane and navigate to “Rules” you will see the following screen. Double clicking any of the templates will give you the option to edit them, customizing as you see fit.

You can thus do simple things, like adding some copyright boilerplate, for instance. Or you could really dive into the weeds of the commenting engine to customize to your heart’s content (be careful here, though). You can exert a great deal of control.
T4 templates offer you power and can make your life easier when used judiciously. They’re definitely a tool worth having in your tool belt. And, if you make use of GhostDoc, this is doubly true.
Learn more about how GhostDoc can help you simplify your XML comments and produce and maintain quality help documentation.

If you write software, the term “feedback loop” might have made its way into your vocabulary. It charted a slightly indirect route from its conception into the developer lexicon, though, so let’s start with the term’s origin. A feedback loop, in general systems terms, is one that uses its output as one of its inputs.
Kind of vague, huh? I’ll clarify with an example. I’m actually writing this post from a hotel room, so I can see the air conditioner from my seat. Charlotte, North Carolina, my temporary home, boasts some pretty steamy weather this time of year, so I’m giving the machine a workout. Its LED display reads 70 Fahrenheit, and it’s cranking to make that happen.
When the AC unit hits exactly 70 degrees, as measured by its thermostat, it will take a break. But as soon as the thermostat starts inching toward 71, it will turn itself back on and start working again. Such is the Sisyphean struggle of climate control.
Important for us here, though, is the mechanics of this system. The AC unit alters the temperature in the room (its output). But it also uses the temperature in the room as input (if < 71, do nothing, else cool the room). Climate control in buildings operates via feedback loop.
Appropriating the Term for Software Development
It takes a bit of a cognitive leap to think of your own tradecraft in terms of feedback loops. Most likely this happens because you become part of the system. Most people find it harder to reason about things from within.
In software development, you complete the loop. You write code, the compiler builds it, the OS runs it, you observe the result, and decide what to do to the code next. The output of that system becomes the input to drive the next round.
If you have heard the term before, you’ve probably also heard the phrase “tightening the feedback loop.” Whether or not you’ve heard it, what people mean by this is reducing the cycle time of the aforementioned system. People throwing that term around look to streamline the write->build->run->write again process.
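As a sketch (not from the post), the AC unit’s control logic reduces to a few lines in which the measured temperature is both the system’s output and the input that decides its next action:

```csharp
// A feedback loop in miniature: the room temperature is the system's
// output, and the thermostat reading of that same temperature is its input.
double roomTemperature = 75.0;        // steamy Charlotte afternoon
const double Target = 70.0;

while (roomTemperature >= Target + 1) // input: if below 71, do nothing
{
    roomTemperature -= 0.5;           // output: the AC cools the room
}
// The unit now idles until the temperature drifts back toward 71.
```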
A History of Developer Feedback Loops
At the risk of sounding like a grizzled old codger, let me digress for a moment to talk about feedback loop history. Long before my time came the punched card era. Without belaboring the point, I’ll say that this feedback loop would astound you, the modern software developer.
Programmers would sit at key punch “kiosks”, used to physically perforate forms (one mistake, and you’d start over). They would then take these forms and have operators turn them into cards, stacks of which they would hold onto. Next, they’d wait in line to feed these cards into the machines, which acted as a runtime interpreter. Often, they would have to wait up to 24 hours to see the output of what they had done.
Can you imagine? Write a bit of code, then wait for 24 hours to see if it worked. With a feedback loop this loose, you can bet that checking and re-checking steps received hyper-optimization.

When I went to college and started my programming career, these days had long passed. But that doesn’t mean my early days didn’t involve a good bit of downtime. I can recall modifying C files in projects I worked on, and then waiting up to an hour for the code to build and run, depending on what I had changed. xkcd immortalized this issue nearly 10 years ago, in one of its most popular comics.
Today, you don’t see this as much, though certainly, you could find some legacy codebases or juggernauts that took a while to build. Tooling, technique, modern hardware and architectural approaches all combine to minimize this problem via tighter feedback loops.
The Worst Feedback Loop
I have a hypothesis. I believe that a specific amount of time exists for each person that represents the absolute, least-optimal amount of time for work feedback. For me, it’s about 40 seconds.
If I make some changes to something and see immediate results, then great. Beyond immediacy, my impatience kicks in. I stare at the thing, I tap impatiently, I might even hit it a little, knowing no good will come. But after about 40 seconds, I simply switch my attention elsewhere.
Now, if I know the wait time will be longer than 40 seconds, I may develop some plan. I might pipeline my work, or carve out some other tasks with which I can be productive while waiting. If, for instance, I can get feedback on something every 10 minutes, I’ll kick it off and do some household chores, periodically checking on it.
But, at 40 seconds, it resides in some kind of middle limbo, preventing any semblance of productivity. I kick it off and check twitter. 40 seconds turns into 5 minutes when someone posts a link to some cool astronomy site. I check back, forget what I did, and then remember. I try again and wait 40 seconds. This time, I look at a Buzzfeed article and waste 10 minutes as that turns into 4 Buzzfeed articles. I then hate myself.
The Importance of Tightening
Why do I offer this story about my most sub-optimal feedback period? To demonstrate the importance of diligence in tightening the loop. Wasting a few seconds while waiting hinders you. But waiting enough seconds to distract you with other things slaughters your productivity.
With software development, you can get into a state of what I’ve heard described as “flow.” In a state of flow, the feedback loop creates harmony in what you’re doing. You make adjustments, get quick feedback, feel encouraged and productive, which promotes more concentration, more feedback, and more productivity. You discover a virtuous circle.
But just the slightest dropoff in the loop pops that bubble. And, another dropoff from there (e.g. to 40 seconds for me) can render you borderline-useless. So much of your professional performance rides on keeping the loop tight.
Tighten Your Loop Further
Modern tooling offers so many options for you. Many IDEs will perform speculative compilation or interpretation as you code, making builds much faster. GUI components can be rendered as you work, allowing you to see changes in real time as you alter the markup. Unit tests slice your code into discrete, separately evaluated components, and continuous testing tools provide pass/fail feedback as you type. Static code analysis tools offer you code review as you work, rather than at a code review days later. I could go on.
The general idea here is that you should constantly seek ways to tune your day to day work. Keep your eyes out for tools that speed up your feedback loop. Read blogs and go to user groups. Watch your coworkers for tips and tricks. Claw, scratch, and grapple your way to shaving time off of your feedback loop.
We’ve come a long way from punch cards and sword fights while code compiles. But, in 10 or 30 years, we’ll look back in amazement at how archaic our current techniques seem. Put yourself at the forefront of that curve, and you’ll distinguish yourself as a developer.
Learn more how CodeIt.Right can tighten the feedback loop and improve your code quality.

First, we want to thank all of you for the support and loyalty you have given us over the last few years. We truly have the most amazing and passionate community of developers on the planet, and it makes our job an absolute joy. If you already have a GhostDoc Pro with License Protection, rejoice! The upcoming changes are not going to affect you (and if you were thinking about purchasing additional licenses, now is the time).
If you don’t have GhostDoc Pro, this is your last chance to purchase it with the License Protection and receive free upgrades for the life of the product.
We are working harder than ever to bring more great features to your favorite plugin. We are super excited about the things we’re working on and can’t wait to share them with you over the next few months!
We will be making upgrade protection changes for new GhostDoc Pro users in order to align GhostDoc Pro maintenance with all other SubMain products.
Starting January 1, 2015, for new license purchases only, we are retiring the lifetime License Protection option for GhostDoc Pro and replacing it with an annual Software Assurance subscription offering.
If you have been thinking about buying new license(s) or adding more licenses, now is the time! Purchase GhostDoc Pro with License Protection by December 31, 2014 and save big on future GhostDoc Pro upgrades!
What is Software Assurance subscription?
SubMain customers can purchase 12 months of Software Assurance subscription with the purchase of any new license. Upgrade protection includes access to all major and minor version upgrades for 12 months from the date of purchase at no additional charge.

Upgrade Protection Timeline 2015
For example, if a new GhostDoc Pro license is purchased on May 1, 2015, upgrade protection will expire on April 30, 2016. During this time, the customer can download and install any minor version upgrades. In addition, if SubMain issues a major release of GhostDoc Pro during the subscription period, the license can be upgraded to the latest version at no additional charge. With SubMain’s Software Assurance, customers will always have access to the latest features and fixes.
For more information please see Software Assurance – Renewal / Reinstatement
Again, please note that this new upgrade protection subscription will only affect new license purchases after January 1, 2015. All existing customer licenses with License Protection and those purchased by December 31st, 2014 will be honored and free upgrades will be provided to users with License Protection for the life of the product.
Thanks again for all of your support. Keep an eye out for more new exciting releases coming very soon!
[Edit: added some frequently asked questions]
Q: How does the Software Assurance subscription work for GhostDoc Pro?
A: It works the same way it does for all other SubMain products – the initial subscription term is one year from the purchase date. It is renewed at the end of the subscription for another year unless you choose to discontinue the subscription. If your license purchase has not included subscription auto-renewal, you need to renew your subscription manually in order to keep it current.
For more information please see Software Assurance – Renewal / Reinstatement
Q: I have purchased GhostDoc Pro without the License Protection. Can I add it now?
A: No, License Protection is not something that can be added after the license purchase.
Q: How long do I get updates if I don’t purchase Software Assurance subscription?
A: With a new license purchase you get 90 days of free product updates if you have not purchased the Software Assurance subscription option.
Q: With License Protection do I get all future features or not?
A: Customers who purchased GhostDoc Pro with License Protection before it is replaced with the Software Assurance subscription get exactly the same features as the users with subscription. Think of the soon to be retired License Protection as a prepaid lifetime subscription.

We are happy to introduce the release of GhostDoc v4.7, a version that is greatly influenced by feedback from our users. It extends Visual Studio 2013 support and introduces an Enterprise version, Help Configurations, Help Content Targeting, embedding images into help markup, hyperlinking to base .NET Framework classes, abbreviation expansion, Non-breaking Words list, and much more:
• Visual Studio 2013 support
• Introduced GhostDoc Enterprise
• (Pro) New Help Configuration feature – save custom configured help generation settings and switch between them easily
• (Pro) Help Content Targeting – the ability to create a (partial) ‘filtered’ help file based on attributes in your XML comments and tag filtering in Help Configuration. If you need different help content for public API users, testers, internal documentation, etc., you can do that now!
• New Abbreviations dictionary enables expanding abbreviations to full words (for example, ‘args’ -> ‘arguments’)
• New Non-breaking Words list to prevent splitting of words such as ‘CheckBox’ or ‘ListView’ when a comment template is generated
• (Pro) Embed/reference images into the Help markup
• (Pro) Option to skip documenting private/internal/protected members with Document File/Type batch commands
• (Pro) .NET Framework classes are now hyperlinked to corresponding Microsoft reference page for additional information
• (Ent) Customize Comment Preview
• (Ent) Customize help layout and template
For the complete list of v4.7 changes see What’s New in GhostDoc and GhostDoc Pro v4.7
GhostDoc Enterprise
We have identified a clear need for a new kind of GhostDoc product, specifically suitable for enterprises and customers who need advanced configuration features for the help file output.
The GhostDoc Enterprise version that we are officially introducing today offers silent deployment options, full customization of the template and layout of the Comment Preview and Help Files. The Enterprise license customers are also eligible for the on-premises Enterprise Licensing Server option.
For edition comparison please see this page –

Help Configuration and Help Content Targeting
The new Help Configuration feature enables you to create ready-to-use help generation profiles. You can easily switch between profiles, depending on what kind of help you are producing. Also, you can define attributes in your XML comments and use tag filtering in Help Configuration to target help content to your specific audience.
Help Configuration profiles include settings for:
• Output format
• Scope
• Projects to include
• Header and footer
• New tag filtering for help content targeting
How do I try it?
Download v4.7 at
Feedback is what empowers us!
Let us know what you think of the new version here –

PrettyCode.Print and StudioTools no longer fit our strategic corporate direction to deliver the best code quality tools on the market.
Discontinuing these products enables us to reinvest the efforts into our flagship products and offer even better code quality tools. We will be releasing new greatly enhanced versions of our existing products as well as new products over the next six months. For a sneak peek of what’s coming please see our Product Feedback board.
PrettyCode.Print for VB6/VBA and PrettyCode.Print for .NET will no longer be further developed. Both products will be converted into a free download in their current state. While we will continue to offer technical support for PrettyCode.Print products for six more months from now, there will be no enhancements, improvements, or bug fixes for these two products. You can download PrettyCode.Print for VB6/VBA and PrettyCode.Print for .NET in the Community Downloads section.
CodeIt.Once is retired and no longer available for download. We encourage you to learn about our CodeIt.Right product which offers automated refactorings and proactively finds the opportunities for refactoring and improving your code.
StudioTools is retired and will no longer be offered for download or supported.
CodeSpell will not be available for purchase for approximately 6 months. The CodeSpell engine is being rewritten to allow greater flexibility and better features, after which, the code spelling feature will be offered as part of our GhostDoc Pro product. All customers that purchased CodeSpell from SubMain (after March 9th, 2010) will be offered, at no charge, the equivalent number of licenses for GhostDoc Pro once the code spelling feature is released.
We sincerely appreciate your continued support and look forward to working with you in the future.

Code quality developer tools are the direction we’ve been following since the introduction of CodeIt.Right, and we are taking this commitment to the next level in 2010 with two new products and new features for our existing products. One of the new products to be released in 2010 will assist in unit testing, code coverage, and test code profiling; the second new product will be complementary to CodeIt.Right. All three products together will comprise our new Code Quality Suite. Additionally, we will continue to keep up with the Visual Studio 2010 release schedule and have all of our products 2010-compatible when VS2010 is RTM.
Here is what we are planning for 2010:
• New product!
◦ Coming March 2010: we are adding to our product line by offering a unit test runner and code coverage product.
• New product!
◦ Project Anelare (code name) – we will provide details on this project as we get closer to a public preview. At this point we can share that this will be a product complementary to CodeIt.Right – together they will encompass our code quality package.
• VS2010 support
◦ For all products – most of our products are compatible with VS2010 RC, and we will be VS2010 RTM compatible by the time it RTMs.
• CodeIt.Right
◦ Optimized rule library performance: the new version will be released the first week in March!
◦ Community Rule Valuation & Review: we are pioneering “social” in code analysis by enabling the community to rate rules and provide feedback; as well as leverage the community feedback, best uses and best practices for each rule.
◦ NEW Rules – with emphasis on security, FxCop/StyleCop parity, SharePoint, WPF & Silverlight rules.
◦ (EE) Trend Analysis: monitor code quality improvements over time.
◦ (EE) Integration with manual code review tools.
◦ Global Suppressions: adding support for GlobalSuppressions and extending syntax of the SuppressMessage attribute for more flexible in-code exclusions.
◦ Multi-select in the violations list.
◦ Copy Rule feature: clone and change rule instance configuration
◦ Command line enhancements: open command line/build violations output in Visual Studio for correction
◦ Annotation: for excludes and corrections
◦ XAML support: enables building Silverlight and WPF specific rules
◦ Profile Wizard: a quick-start, no-brainer user/project profile based on the project type, importance, community valuation, favorite food, etc.
• GhostDoc
◦ We are currently prioritizing the feature set for the new version of GhostDoc. If you have a feature request that you have not submitted yet, share it with us in the GhostDoc forum.
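For context on the Global Suppressions roadmap item above: the standard .NET SuppressMessage attribute (which the plan was to extend) marks a single violation as excluded, in code, with a recorded reason. A sketch of its ordinary usage (the class, method, and justification text are invented for illustration; the rule category and ID are standard FxCop ones):

```csharp
using System.Diagnostics.CodeAnalysis;

public class OrderProcessor
{
    // In-code exclusion of one specific analysis violation.
    [SuppressMessage("Microsoft.Performance", "CA1822:MarkMembersAsStatic",
        Justification = "Kept as an instance method for interface compatibility.")]
    public void Process()
    {
    }
}
```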
Stay tuned to our blog for more details about our progress!

Added RuleID to rule help documentation.

Earlier this week I attended VSX DevCon on the Microsoft campus, where I learned quite a bit about the changes coming in the upcoming version of Visual Studio (Dev10) and its Extensibility Model. It was very important for us to stay abreast of future Microsoft releases and stay ahead of the game with our products. I twittered some of the event.
It was a fun conference, too. I talked to many MS VSX folks, saw Rico Mariani talk for the first time (he is excellent!), and met new people and old friends, including Richard Hundhausen, Roy Osherove, and Eli Lopian of TypeMock. I also discussed templating options (something to be included in a future version of CodeIt.Right) with the T4 guys.
While fun, the food and snacks were a little different this time around. There was another conference across the hall, and MS catering had naturally set up separate snack tables. Which I didn’t really pay much attention to… So when the VSX tables got low on snacks, I was yelled at by a catering lady for trying to get something from the other conference’s table. I had the weird feeling of a kid caught stealing cookies…
Well, the story the next day made me feel a little better about my “cookie incident.” It was already mentioned by Ken Levy via Twitter and by Ted Neward on his blog – Steve Ballmer got scolded by catering for taking a cookie without a VSIP badge… At least I wasn’t the only one who got confused and yelled at.
Back to business now. My being out of the office for a few days didn’t affect the progress on the v1.1 release of CodeIt.Right. In fact, it is finished and tested, and we are releasing it tomorrow.
Stay tuned!

by Serge Baranovsky
CodeIt.Right is finally finished, after about three years in the making. That’s right, CodeIt.Right is released! It is out in its all-new shiny package.
I would like to make a pause here and extend my deepest gratitude towards everyone who helped make this release possible. From the SubMain development team, the advisory board members, to everyone who participated in the community and contributed feedback over the year since we released the first public beta.
CodeIt.Right is my seven-year dream come true. The tool is out! Cheers! (I truly believe that code analysis coupled with automatic refactoring will change the way .NET developer teams and solo developers work!)
With the touchy-feely stuff out of the way, let’s get back to the actual product, shall we?
If you are new to CodeIt.Right:
What’s next?
This is not a road map per se, just highlights of where we are heading with CodeIt.Right over the next few months:

• We will keep publishing new rules as they are developed and will push them to you using the Auto-Update feature

• We will publish more tutorials and how-tos on using the product and developing your own custom rules using the CodeIt.Right SDK

• Create a community section over at and allow custom-developed rules to be shared with other users

• Version v1.1 is coming in 4-6 weeks – .NET 3.5 syntax, merging profiles, Pivot View improvements, generating team guidelines document template from profile, and of course, more rules!

• Version v2.0 is preliminarily scheduled for summer 2008 and will introduce VSTS integration and manual refactorings (we will merge CodeIt.Once into CodeIt.Right)
So, don’t wait, go ahead and download CodeIt.Right – – play with it, explore the rules included in the box, then get outside the box and try developing your own custom rules, share them, ask questions, and tell us what you think!

by Serge Baranovsky
New set of CodeIt.Right rules:

• Avoid unsealed attributes (Performance)

• COM visible types should be creatable (Interoperability)

• Pointers should not be visible (Security)

• Remove empty finalizers (Performance)
(All of the new rules above offer AutoCorrect options)
This set of rules is distributed using the Rule AutoUpdate feature added in Beta 2 of CodeIt.Right. Auto-Update triggers 15 minutes after you start Visual Studio. If you have turned the feature off, you can manually start the update wizard from the CodeIt.Right/Help & Support/Update Rules menu.
Another set of rules will be distributed with a new build of CodeIt.Right next week, as some of them require an updated version of the SDK.
Please leave your feedback on how much you like/dislike the AutoUpdate feature, along with your suggestions, in the CodeIt.Right forum.
(Note: if you skip the custom profile update step in the Rules Update Wizard, you can still add the new rules to your custom profile(s) using the Add Rule button in the Profile Editor – you will find the recent rules by sorting the date column)
For more information on CodeIt.Right, the getting started presentation, support, and feedback, see the Beta 1 announcement post.

A new Beta build (1.0.07100) of CodeIt.Right is available – it adds a new rule category, “Exception Handling”, with 3 new rules, revamps the toolbar, and fixes a whole lot of bugs reported in the last month. Try it out!
(Note for current Beta users: to see the new Exception Handling rules, you will need to switch back to the built-in profile or add them to your custom profile(s))
Next stop – new version of PrettyCode.Print for .NET to be released late this month.
Changes in build 1.0.07100:

• REMOVED: “Stop Analysis” button in toolbar and menu

• CHANGED: “Start Analysis” toolbar button – replaced icon with text

• CHANGED: Moved built-in profile into a separate resource DLL – SubMain.CodeItRight.Rules.Default.dll 

• FIXED: Drawing issue for marker box 

• ADDED: New rule category – “Exception Handling”

• ADDED: New rule “DoNotRaiseSpecifiedExceptionTypes” with correct action “Change type of exception to specified type”

• ADDED: New rule “DoNotCatchSpecifiedExceptionTypes” with correct action “Change type of exception to specified type”

• ADDED: New rule “DoNotHandleNonCLSCompliantExceptions” with correct action “Catch specific exception using parameter catch block”

• UPDATED: Help file – with new rules and category information

• other fixes
With over two dozen bugs fixed (not listed individually above) and 3 new exception handling rules, this is a significantly more stable Beta build.
Download build 1.0.07100 here –
For more information on CodeIt.Right, the getting started presentation, support, and feedback, see the Beta announcement post.

by Serge Baranovsky
Mike Gunderloy posted a nice review of CodeIt.Once on Larkware. Mike writes:
“These [refactorings] are all hooked into the user interface though the main menus, the shortcut menus in the code editor, and as appropriate in other spots – for example, you can get to Reorder Parameters from the Object Browser shortcut menu when you have a method selected. One nice thing about CodeIt.Once compared to other refactoring products that I’ve tried is that the learning curve is very gentle. Every refactoring uses a wizard mode by default, where a step-by-step dialog box walks you through what the refactoring does and helps you make the appropriate choices along the way. For example, the Extract Method wizard offers advice on choosing good method names and reminds you to use Pascal casing on the name, and then provides a user interface to let you name and order parameters. When you’re a little more comfortable, you can suppress the introductory screens on the wizards. As a final step, you can opt to dispense with the wizards entirely (on a refactoring by refactoring basis) and operate in expert mode, where invoking a refactoring opens a dialog box that prompts you for just the necessary information.”
Thanks for the thorough review, Mike!

For years, I can remember fighting the good fight for unit testing. When I started that fight, I understood a simple premise. We, as programmers, automate things. So, why not automate testing?
Of all things, a grad school course in software engineering introduced me to the concept back in 2005. It hooked me immediately, and I began applying the lessons to my work at the time. A few years and a new job later, I came to a group that had not yet discovered the wonders of automated testing. No worries, I figured, I can introduce the concept!
Except, it turns out that people stuck in their ways kind of like those ways. Imagine my surprise to discover that people turned up their nose at the practice. Over the course of time, I learned to plead my case, both in technical and business terms. But it often felt like wading upstream against a fast moving current.
Years later, I have fought that fight over and over again. In fact, I’ve produced training materials, courses, videos, blog posts, and books on the subject. I’ve brought people around to see the benefits and then subsequently realize those benefits following adoption. This has brought me satisfaction.
But I don’t do this in a vacuum. The industry as a whole has followed the same trajectory, using the same logic. I count myself just another advocate among a chorus of voices. And so our profession has generally come to accept unit testing as a vital tool.
Widespread Acceptance of Automated Regression Tests
In fact, I might go so far as to call acceptance and adoption quite widespread. This figure only increases if you include shops that totally mean to and will definitely get around to it like sometime in the next six months or something. In other words, if you count both shops that have adopted the practice and shops that feel as though they should, acceptance figures certainly span a plurality.
Major enterprises bring me in to help them teach their developers to do it. Still, other companies consult and ask questions about it. Just about everyone wants to understand how to realize the unit testing value proposition of higher quality, more stability, and fewer bugs.
This takes a simple form. We talk about unit testing and other forms of testing, and sometimes this may blur the lines. But let’s get specific here. A holistic testing strategy includes tests at a variety of granularities. These comprise what some call “the test pyramid.” Unit tests address individual components (e.g. classes), while service tests drive at the way the components of your application work together. GUI tests, the least granular of all, exercise the whole thing.
Taken together, these comprise your regression test suite. It stands guard against the category of bugs known as “regressions”: defects where something that used to work stops working. For a parallel example in the “real world,” think of the warning lights on your car’s dashboard. The “low battery” light comes on because the battery, which used to work, has stopped working.
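To make the bottom of that pyramid concrete, a unit test looks something like this sketch (written xUnit-style; the Calculator class here is invented purely for illustration):

```csharp
using Xunit;

// Hypothetical class under test.
public class Calculator
{
    public int Add(int a, int b) => a + b;
}

public class CalculatorTests
{
    [Fact]
    public void Add_ReturnsSumOfOperands()
    {
        var calculator = new Calculator();

        // If a change ever breaks Add, this test flags the regression
        // within seconds of the next build, long before QA sees it.
        Assert.Equal(5, calculator.Add(2, 3));
    }
}
```

Hundreds of tests of this shape, run on every build, form the repeatable safety net the rest of this post argues for.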
Benefits of Automated Regression Test Suites
Why do this? What benefits do automated regression test suites provide? Well, let’s take a look at some.
• Repeatability and accuracy. A human running tests over and over again may produce slight variances in the tests. A machine, not so much.
• Speed. As with anything, automation produces a significant speedup over manual execution.
• Fast feedback. The automated test suite can tell you much more quickly if you have broken something.
• Morale. The fewer times a QA department comes back with “you broke this thing,” the fewer opportunities for contentiousness.
I should also mention, as a brief aside, that I don’t consider automated test suites to be acceptable substitutes for manual testing. Rather, I believe the two efforts should work in complementary fashion. If the automated test suite executes the humdrum tests in the codebase, it frees QA folks up to perform intelligent, exploratory testing. As Uncle Bob once famously said, “it’s wrong to turn humans into machines. If you can write a script for a test procedure, then you can write a program to execute that procedure.”
Automating Code Review
None of this probably comes as much of a shock to you. If you go out and read tech blogs, you’ve no doubt encountered the widespread opinion that people should automate regression test suites. In fact, you probably share that opinion. So don’t you wonder why we don’t more frequently apply that logic to other concerns?
Take code review, for instance. Most organizations do this in entirely manual fashion, outside of, perhaps, a so-called “linting” tool. They mandate automated test coverage and then content themselves with siccing their developers on one another in meetings to gripe over tabs, spaces, and camel casing.
Why not approach code review the same way? Why not automate the aspects of it that lend themselves to automation, while saving human intervention for more conceptual matters?
In a study by Steve McConnell and referenced in this blog post, “formal code inspections” produced better results for preemptively finding bugs than even automated regression tests. So it stands to reason that we should invest in code review in the same ways that we invest in regression testing. And I don’t mean simply time spent, but in driving forward with automation and efficiency.
Consider the benefits I listed above for automated tests, and look how they apply to automated code review.
• Repeatability and accuracy. Humans will miss instances of substandard code if they feel tired — machines won’t.
• Speed. Do you want your code review to take seconds, or hours and days?
• Fast feedback. Because of the increased speed of the review, the reviewee gets the results immediately after writing the code, for better learning.
• Morale. The exact same reasoning applies here. Having a machine point out your mistakes can save contentiousness.
I think that we’ll see a similar trajectory to automating code review that we did with automating test suites. And, what’s more, I think that automated code review will gain steam a lot more quickly and with less resistance. After all, automating QA activities blazed a trail.
I believe the biggest barrier to adoption, in this case, is the lack of awareness. People may not believe automating code review is possible. But I assure you, you can do it. So keep an eye out for ways to automate this important practice, and get in ahead of the adoption curve.
Tools at your disposal
SubMain offers CodeIt.Right, which integrates easily into Visual Studio to provide a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Microsoft Bot Framework: Forms Dialog From JSON Schema

I’ve already written a few posts about the Microsoft Bot Framework. If you missed them, you can check out my posts here. Today, we are going deeper and reviewing another really good feature of Form Dialog: the ability to create a form using a JObject. As before, Form Dialog is going to create a form and allow our bot to ask field by field until it completes the form, but instead of using a static C# class to define our form, we are going to provide a JSON Schema.
In order to use this feature, you need to make sure you add the NuGet package Microsoft.Bot.Builder.FormFlow.Json to your project. It defines the new namespace Microsoft.Bot.Builder.FormFlow.Json, which contains the code that allows using a JSON Schema for FormFlow.
Creating the JSON Schema
Now we need to define our form. This time, we are going to create a JSON file to do it. In the References property, we define all the dependencies for our form. In the same way, under the Imports property, we define all the namespaces to include. Another important property is OnCompletion, where we put a C# script to execute after our bot finishes filling in the form. Then we have the properties field, where we place the fields we want our bot to ask the customer about.
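Putting those pieces together, a minimal schema might look something like this (a sketch only; the field names and the OnCompletion script are invented for illustration, not taken from a real project):

```json
{
  "References": [ "Microsoft.Bot.Builder.dll" ],
  "Imports": [ "Microsoft.Bot.Builder.Dialogs" ],
  "type": "object",
  "required": [ "Name", "PhoneNumber" ],
  "properties": {
    "Name": { "type": "string" },
    "PhoneNumber": { "type": "string" }
  },
  "OnCompletion": "await context.PostAsync(\"Thanks! We got your details.\");"
}
```

Because this is just data, it can live in a file, a database, or an admin screen, which is what makes the dynamic scenarios later in this post possible.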
Once we have defined the schema file, we need to create a method that returns an object implementing the IForm interface, as we did last time. As you can see in the code below, we need to use the FormBuilderJson object, and we just pass the schema to the constructor.
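The wiring looks roughly like this (a sketch: the file name and factory class are my own inventions; FormBuilderJson and its schema-taking constructor come from the Microsoft.Bot.Builder.FormFlow.Json package):

```csharp
using System.IO;
using Microsoft.Bot.Builder.FormFlow;
using Microsoft.Bot.Builder.FormFlow.Json;
using Newtonsoft.Json.Linq;

public static class JsonFormFactory
{
    // Build the form from the JSON Schema instead of a static C# class.
    public static IForm<JObject> BuildJsonForm()
    {
        // "MyForm.json" stands in for wherever your schema file lives.
        var schema = JObject.Parse(File.ReadAllText("MyForm.json"));

        return new FormBuilderJson(schema)
            .AddRemainingFields() // ask for every field declared in the schema
            .Build();
    }
}
```

Note that the form type is IForm&lt;JObject&gt; rather than a form over a concrete C# class, since the answers are collected into the JObject itself.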
As you can see, this feature gives us the flexibility to define custom forms and the ability to change them dynamically. Thinking out loud, I can imagine a use case where we want to provide our customers with a way to model their own forms, so we define the form as a JSON Schema and give them an admin screen to change it.

For more details about the Microsoft Bot Framework, you can use this link.
If you found this post useful, please don’t forget to press the like button and share it. If you are in doubt, don’t hesitate to ask a question and, as always, thank you for reading.

The One Thing Every Company Can Do to Reduce Technical Debt

Editorial Note: I originally wrote this post for the NDepend blog. You can check out the original here, at their site. While you’re there, take a look at the technical debt functionality in the latest version of NDepend.
The idea of technical debt has become ubiquitous in our industry. It started as a metaphor to help business stakeholders understand the compounding cost of shortcuts in the code. Then, from there, it grew to define perhaps the foundation of tradeoffs in the tech world.
You’d find yourself hard pressed, these days, to find a software shop that has never heard of tech debt. It seems that just about everyone can talk in the abstract about dragons looming in their code, portending an eventual reckoning. “We need to do something about our tech debt,” has become the rallying cry for “we’re running before we walk.”

As with its fiscal counterpart, when all other factors are equal, having less tech debt is better than having more. Technical debt creates a drag on the pace of delivering new features until someone ‘repays’ it. And so shops constantly grapple with the question, “how can we reduce our tech debt?”
I could easily write a post where I listed the 3 or 5 or 13 or whatever ways to reduce tech debt. First, I’d tell you to reduce problematic coupling. Then, I’d tell you to stop it with the global variables. You get the idea.
But today, I want to do something a bit different. I want to talk about the one thing that every company can do to reduce tech debt. I consider it to be sort of a step zero.
The Tale of the Absent Product Owner
But before I go for the big reveal, I’d like to veer into a bit of a parable. Don’t worry — I’ll try to make this at least nominally entertaining via the art of the narrative.
Once upon a time, during my travels as an IT management consultant, I encountered a team looking to improve its throughput. Okay, that might describe every team I work with, so I’ll get a bit more specific. This team wanted to improve throughput but found itself stuck when it couldn’t get answers to clarifying questions about the business quickly enough. “Should we consider a user submitting an invalid form an error state, or do we expect that?” Crickets.
This team had “gone Agile” at some point. From there, they worked their way into a comfort zone with this new reality, adapting to the roles and ceremonies of a Scrum team. One such role, the product owner, serves to represent the business to the team. But with this particular team, the product owner seemed in a constant state of attending meetings in other locations, away from the team.
How did they fix this? Meetings? Interventions from on high? Pleading? Nope. They went more low tech and simpler. They simply started writing on a big white board, “number of hours with access to the product owner today” followed by the count. The whole department could then see that this team had access to its product owner for 0 or 1 hours. With no other interventions whatsoever, the number increased significantly within a matter of weeks.
Sunlight Is the Best Disinfectant
I had no shortage of management consulting cliches from which to pick for this section’s header, but I went with words of Louis Brandeis. I just like the ring of it.
Sunlight makes the best disinfectant. This colorfully illustrates the notion that simply illuminating problems can make them go away. In this case, calling the product owner’s (and everyone else’s) attention to his rampant absenteeism inspired him to address the problem without further prompting. You have probably experienced a phenomenon like this in your personal life. For instance, perhaps just the act of weighing yourself every day makes you lose a bit of weight, simply by virtue of putting the effects of your eating choices in the front of your mind.
Generally speaking, focusing the spotlight on something can, in and of itself, alter the thing you’re looking at. We might borrow from physics and think of this as “observer effect.” It packs a powerful punch as a recommendation because of both its inevitability and potential healing power. If you want to improve something with any efficacy, you must first start measuring it so that you have a benchmark.
Thus shining a light on something represents a possible improvement strategy in and of itself, as well as a first step ahead of other interventions. It is for this reason that I think of measurement as “step zero.”
Back to the Topic of Tech Debt
Simply put, you can best fight tech debt by visibly measuring it. Do you need to get the exact number of hours or dollars spent on employee labor completely correct? No, of course not. Don’t get too hung up on that.
Just go put a figure to it so that you can watch that figure change as you do your work. I cannot overstate the importance of this. If you wring your hands over the particulars, your tech debt will remain forever unquantified and thus abstract. If you say, “we have a lot of tech debt,” business stakeholders will answer, “so does every place I’ve ever worked — whatever.”
But if you write up on the big board that you have 225 days of tech debt when yesterday’s figure had only 200, people are going to notice. Discussions will start. Suddenly, the tech debt becomes everyone’s problem. And, once that happens, watch as the number starts to decrease as if of its own volition.

Test Environments, The Right Way

Testing Environments, The Traditional Way
Your team probably has a small number of environments that everyone shares during your development sprints. Depending on budget, team size, or frequency of releases, these may vary, but the picture will look roughly like this:
• Dev – Allows developers to test features beyond their local environment before handing them to the QA team for them to test.
• Test – Where the QA team performs their functional testing, usually within the scope of a release.
• UAT – An environment that mimics (or at least tries to mimic) production. Usually used to try to reproduce non-functional issues seen in production or for performance testing.
At least the Dev and Test environments are quite different from production from an infrastructure perspective, as they’re only intended for functional testing.
Why Is It Done This Way?
Historically, computing resources were expensive, but even now, in the cloud era, there are still a number of factors that encourage this approach. Let’s dive into what the drivers could be.
Environment Setup and Deployment Pain
Environment creation can be divided into two broad areas:
• Infrastructure setup:
◦ On-premise – Hardware needs to be bought, host OS be setup, VMs created, etc.
◦ Cloud – Compute instances need to be created, IaaS security specifics configured, etc.
◦ Both – Ensure only the team can access the environments by configuring SSH, firewalls, DNS, proxies, VPNs, etc.
• Application setup (this applies both to on-premise and cloud variants):
◦ Install and configure third party services: DBs, caches, middleware, etc.
◦ Build your application artifacts, usually using a CI tool.
◦ Deploy your application artifacts, using scripts or a CM tool (Puppet/Chef/Ansible/etc).
All team members need to be able to access the environment at any time and from anywhere, or at least during working hours and from the workplace, without any extra effort on their side.
Ease of Management
Ideally, the fewer resources dedicated to test environment management, the better. Usually, this is a task handled by the operations team. Doing the above manually is a daunting task. If you automate it, you still need to write, maintain, and evolve whatever solution you used for automation.
Some Fresh Stats
I recently found a report with some very interesting insights on the topic: World Quality Report 2016-17 (pages 45-50) by Capgemini.

Test environment types

Here’s my take on these results.
From the first figure, we can see that 28% of testing still happens the traditional way, as described at the beginning of this post. The rest happens on temporary test environments, whether cloud-based, virtualized, or non-cloud. Maybe this à-la-carte tendency is budget-related; maybe it comes from an agility mindset. Either way, it reflects a move toward creating testing environments more dynamically. It would be interesting to see what the environment creation process is and how much overhead it adds in terms of infrastructure and application setup, but it at least feels like a step in the right direction.
The second figure is the one that leads me to more interesting conclusions, though. All the sections in the graph except the fourth one (from the top) suggest that almost 50% of the people surveyed saw the handling of testing environments as an issue in one way or another: maintenance of multiple versions, the ability to book or manage environments, lack of visibility into availability, availability of the right environment at the right time, and inability to manage excess demand. As I see it, this implies that the transition to on-demand environments isn’t being done properly, and there are probably infrastructure and/or budget restrictions affecting it too. I would also infer that there is still a heavy reliance on the operations team to create the testing environments, and that test team deadlines probably suffer from these limitations. In any case, there seems to be quite some room for improvement.
Environments, The Docker Way
One of the key benefits of Docker is the reproducibility of environments. This means that all environments created with the same configuration will behave the same way, no matter where they are created, no matter how many times they are recreated.
With Docker Compose, you can combine the creation of the different services that make up your environment. These services can be application services (your business logic) or third-party ones (DBs, caches, middleware, etc.). This way, we avoid the manual setup and configuration steps and the need for error-prone deployment scripts.
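As a sketch of what that looks like, a minimal Compose file (service names, images, and ports here are purely illustrative) might combine an application service with its backing services:

```yaml
# Hypothetical docker-compose.yml: one application service plus the
# third-party services it depends on.
version: "3"
services:
  app:
    build: .            # builds the image that bundles your application artifact
    ports:
      - "8080:8080"
    depends_on:
      - db
      - cache
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example
  cache:
    image: redis:3.2
```

A single `docker-compose up` then recreates the whole environment identically, on any machine, as many times as needed.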
But we still need to be able to bundle our application artifacts as part of our Docker images. This implies checking out our project from our Git (hopefully!) repository, building the artifact, and creating an image bundling said artifact.
Unfortunately, not every team member has the skills or expertise to work with Git, Docker, and Docker Compose, or with whatever the specific project technologies are (needed in order to build the application artifacts to be bundled with Docker).
How Do We Solve This?
1. Remove the Docker learning curve.
2. Reduce the required Git skills.
3. Provide the ability to build your Docker images in order to launch your application services, and launch third-party services your application depends on.
4. Ensure our environment management is simple and can be done everywhere, by everyone.
This is what Sandbox can do for you! You get an easy-to-use UI, providing:
• Full Git integration – Launch an environment from a specific branch/commit with the click of a button! In fact, launch multiple environments simultaneously, allowing you to compare the behavior of different branches side by side.
• Comprehensive environment management – Lifecycle (start/stop/restart) actions for your environments and/or services that compose them, and access to your services logs.
• Point and click editor – A simple way to define the different services that compose your application.
Wrapping Up
After many years in software development, you get acquainted with the issues around testing environments. Even though, as a developer, you might not be involved with the testing process directly, it’s always part of the release process. As such, these issues affect you too, especially when fixing bugs or working under tight deadlines. I’m sure I’m not the only one who has ever thought about ways of improving this area!
How is your team or company dealing with issues around testing environments? Does the discussion in this post sound familiar? I’d be interested to know your experiences and thoughts on the topic!
I’d like to make it clear that I’m not defending this approach as a silver bullet, as it obviously has its limitations. For the type of testing you would do in a UAT environment, be it replicating infrastructure-related bugs, testing NFRs, or performance testing, this is clearly a solution that would not work, whether you are using Sandbox or any other solution that relies on a local environment. On the other hand, I think it is at least valid for functional testing, which in my experience makes up a big part of the overall testing effort in a release.
It may also not be valid if your production deployments are cloud-based and you use some kind of service from your provider that you cannot access unless you’re running from its infrastructure, or if you have network-related limitations (e.g., relying on an external service that only allows access from a specific network you can’t reach).
Finally, it obviously will depend on the type of application you’re testing. A massive application with lots of services (application or third-party) will probably struggle to run on an average box, but every project I’ve been involved in definitely ran on my laptop during the development phase, so in those cases this would have been a perfectly valid approach for the QA team to follow.

Generate Your Next Xamarin.Forms Grid: It’s as Easy as Drag and Drop

If you’ve used other development platforms, you’re probably used to using a native toolbox that lets you drag and drop controls or whatever you need to your code. Now you’ve decided to use Xamarin.Forms… and there’s no toolbox.
That’s where the Ultimate UI Controls for Xamarin comes in and gives you the Infragistics Toolbox: the world’s first NuGet powered toolbox for Xamarin.Forms. With this toolbox, you can drag and drop the component or control that you want to use, and the XAML code will be generated for you automatically.
To get started, we’ll open the app that we created in a previous blog post. Once we have the solution open, right-click the project and select Manage NuGet Packages for Solution.

On the Browse tab of the NuGet package manager, search for XF.DataGrid. Select the Infragistics.XF.DataGrid package, and then enable it for the project by checking the checkboxes. Click Install to add the package to the solution.

Now that we’ve added the package, let’s open the toolbox and see what we have. You can open the toolbox by clicking View -> Other Windows -> Infragistics Toolbox.

With the packages that we have installed, we should be able to see different layouts, views, and cells in the toolbox. For now, we’ll focus specifically on using a grid.

Using the solution that we created in the previous blog, we already have a View named GridPage.xaml. Let’s open this page so that we can work with the toolbox. With the page open, place the cursor on a line and then double-click Grid from the toolbox.

A blank grid will automatically get created for us. At this point, the grid isn’t very interesting. We would still need to customize it based on how we want it to look, and how the data is supposed to appear. Let’s delete this grid, and then add it a different way. With the cursor on a line, hold the Ctrl key and then double-click the Grid layout.

Now we’re talking about some real time savings. Not only is the grid created, but the layout is more complex, giving us two columns and five rows automatically. You can use this shortcut for grids and for all of the other controls in the toolbox.
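For reference, a two-column, five-row Xamarin.Forms grid looks roughly like this in XAML (a sketch of the shape of the markup; the exact code the toolbox emits may differ):

```xml
<!-- A 2-column, 5-row grid like the one the toolbox generates -->
<Grid>
  <Grid.ColumnDefinitions>
    <ColumnDefinition Width="*" />
    <ColumnDefinition Width="*" />
  </Grid.ColumnDefinitions>
  <Grid.RowDefinitions>
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
  </Grid.RowDefinitions>
</Grid>
```

Typing all of those definitions by hand is exactly the busywork the Ctrl+double-click shortcut saves you.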
Ready to see how you can save time creating layouts and views with the Infragistics Toolbox? Download a trial of the UI Controls for Xamarin to get started. We also have videos and lessons to help you out.

Design a Pre-Formatted Text Box for Phone Numbers and Credit Cards

Hurry…learn something…quick!
This is part of a series of short tutorials about specific elements, components, or interactions. We’ll cover the UX, the UI, and the construction inside of Sketch. Plus, there’s a freebie for you at the end!

What are we designing?
A pre-formatted text box for things like phone numbers and credit cards.
When are they used?
To collect data that requires high levels of precision, or when you want to make sure you’re sending the database clean records in a consistent format.
Why do they work?
Never expect the user to follow instructions. Instead, enforce formatting within the control itself. It’s a lot harder for users to screw up, and the data you get will always look beautiful. 😍
Phone Numbers: Users choose their country from the dropdown. The field uses masked formatting to show the expected appearance of the number. As they type, the field will automatically format their entry with the appropriate parentheses, hyphens, and spaces.
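The masking behavior described above can be sketched in a few lines. This is an illustrative Python version (the mask string, function name, and US-style format are my own assumptions, not part of any particular UI library):

```python
def mask_phone(digits, mask="(XXX) XXX-XXXX"):
    """Progressively format raw input against a mask, as a field would while typing."""
    # Keep only the digits the user actually typed.
    typed = iter(ch for ch in digits if ch.isdigit())
    out = []
    for m in mask:
        if m == "X":
            d = next(typed, None)
            if d is None:
                break  # ran out of input; show the partial entry
            out.append(d)
        else:
            out.append(m)  # parentheses, hyphens, and spaces come from the mask
    return "".join(out)
```

Feeding it a bare digit string (or one the user already punctuated) yields the same canonical format, e.g. `mask_phone("4155551234")` gives `(415) 555-1234`.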

Credit Cards: Users begin typing their credit card number, and the system starts validating after two digits (validation rules here). When a format is recognized, the associated credit card symbol will appear on the right. It’s best practice to mask the user’s input after the field loses focus so that the wandering eyes of strangers can’t see it and go on an Apple shopping spree.
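Recognizing the card network from the first couple of digits, plus the standard Luhn checksum, is simple enough to sketch. The prefix table below is deliberately simplified for illustration; real IIN ranges are longer and more nuanced:

```python
def card_brand(number):
    """Guess the card network from the leading digits (simplified IIN prefixes)."""
    n = number.replace(" ", "")
    if n[:2] in {"34", "37"}:
        return "amex"
    if n[:1] == "4":
        return "visa"
    if "51" <= n[:2] <= "55":
        return "mastercard"
    return None

def luhn_ok(number):
    """Luhn checksum: double every second digit from the right, sum, check mod 10."""
    digits = [int(c) for c in number if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

With two digits typed, `card_brand` can already pick the symbol to show; `luhn_ok` catches most typos once the full number is in.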

What’s the UI recipe?
A pre-formatted text box is pretty simple to build with Sketch’s nested symbols. There are a couple tricks you can use so they’re ⚡️ fast to design with:
• Icon: This should be on the left for phone numbers, and on the right for credit cards. Make your flag and payment icons symbols so that you can quickly swap them in the inspector panel. I couldn’t be bothered to create hundreds of flags myself, so I use the ones from the Nucleo Icon Set.
• Type: Use a text symbol so that you can swap the text style to show different states of the text box (placeholder, default, etc).
• Container: Create different container symbols so that you can swap in the inspector panel to show various states (default, focused, error).

Sorry, I couldn’t find a flag with 51 stars on it.

These are the nested symbols that make changing state a breeze.
Try it yourself!

Pre-formatted text boxes aren’t particularly difficult to design, but there are some tricks you can use (like nested symbols and overrides) to make your life a little easier during the process.
Before you leave, don’t forget to:
• Grab the UX Power Tools design system to use this component and tons of others just like it.
• Share this tiny tutorial with your friends on social media!


6 Reasons to Version Control Your Database

For most application developers, it’s unthinkable to work without version control. The benefits of tracking and retaining an incremental history of code changes are long understood in the world of software development. No surprise then that the overwhelming majority of respondents in our State of Database DevOps survey confirmed they’re already using this practice for their application code.

Database version control
But it was a different picture when we asked about database version control. Only 58% of those who completed the survey stated that they used version control for their database changes. In a way, it’s understandable, as database version control was, for a long time, seen as unfeasible. But now that’s no longer the case, it’s time database development teams caught on to the benefits.
If you’re not already versioning your database code, here are some of the reasons why you should be and some of the ways that SQL Source Control can help.
1. Easily Share Code Changes Within Your Team
Putting database code into a version control system makes it easier to coordinate the work of the different team members who share responsibility for the database. The ability to rapidly share and manage changes makes it particularly important for teams based in different locations. With SQL Source Control, team members can work on a shared database or each use a local, dedicated copy. With features like object locking, you can avoid conflicts and work more easily, without treading on each other’s toes.
2. Gain better visibility of the development pipeline
A version control system provides an overview of what development work is going on, its progress, who’s doing it, and why. Version control maintains detailed change histories, and can often be associated with issue tracking systems. For example, SQL Source Control lets you associate database tasks with Microsoft’s Team Foundation Server work items so you can get a complete view of your workflow (as demonstrated in our recent webinar).
3. Have the Ability to Rollback or Retrieve Previous Versions of the Database
While you should always have a reliable backup strategy in place, getting a database into version control also provides an efficient mechanism for backing up the SQL code for your database. Because the history it provides is incremental, version control lets developers explore different solutions and roll back safely in the case of errors, giving you a risk-free sandbox. With SQL Source Control, it’s simple to roll back and resolve conflicts straight from the Object Explorer.
4. More Readily Demonstrate Compliance and Auditing
The change tracking provided by version control is the first step to getting your database ready for compliance, and an essential step in maintaining a robust audit trail and managing risk. Compliance auditors will require an organization to account for all changes to a database, and detail all those with access to it. With SQL Source Control, you can look through the full revision history of a database or database object and see exactly who made the changes, when they made them, and why.
5. Put the Foundations in Place for Database Automation
Having a single version of truth for your database code simplifies change management. Complex processes become more automatable and repeatable, and deployments much more predictable. Using code checked into SQL Source Control as the basis for the automated builds and tests run by DLM Automation means that problems are found earlier, and higher-quality code is eventually shipped and deployed.
6. Synchronize Database and Application Code Changes
Having the database in version control directly alongside the application will also integrate database changes with application code changes. You’ll always know the version of the database being deployed directly corresponds to the version of the application being deployed. This direct integration helps to ensure better coordination between teams, increase efficiency, and helps when troubleshooting issues. SQL Source Control plugs into version control systems like TFS, Git, and Subversion that are already used for storing application code changes.
While it’s true that database version control wasn’t always achievable, the availability of tools like SQL Source Control means there is now no reason why the percentage of companies and organizations versioning their database code shouldn’t be higher. If you’re one of the 42% not yet version controlling your database, maybe one of the six reasons above will change your mind.
Find out more about putting database version control in place with SQL Source Control. SQL Source Control is part of the SQL Toolbelt, a suite of essential tools to boost productivity and simplify development, testing, and deployment.

Automating the Automation Tools at Capital One

Listening to his talk, it seems like George Parris and his team at Capital One aren’t keeping “banker’s hours.” George is a Master Software Engineer, Retail Bank DevOps at Capital One. At the All Day DevOps conference, George gave a talk entitled Meta Infrastructure as Code: How Capital One Automates Our Automation Tools With an Immutable Jenkins, describing how his team automated the DevOps pipeline for the online account opening project at Capital One, a major bank in the United States. Of course, there is a lot to learn from their experience.

George started by pointing out that software development has evolved – coming a long way even in just the last few years. Developers now design, build, test, and deploy, and they no longer build out physical infrastructure – they live in the cloud. Waterfall development is rapidly being replaced by Agile, infrastructure as code, and DevOps practices.
Where we see these technologies and methodologies implemented, IT Operations teams are acting more like developers, designing how we launch our applications. At the same time, development teams are more responsible for uptime, performance, and usability. And, operations and development work within the same tribe.
George used the Capital One Online Account Opening project to discuss how they automate their automation tools – now a standard practice within their implementation methodology.

For starters, George discussed how Capital One deploys code (hint: they aren’t building new data centers). They are primarily on AWS, they use configuration management systems to install and run their applications, and they, “TEST, TEST, TEST, at all levels.” Pervasive throughout the system is immutability – that is, once created, the state of an object cannot change. As an example, if you need new server configurations, you create a new server and test it outside of production first.
They use the continuous integration/continuous delivery model, so anyone working on code can contribute to the repositories that, in turn, initiate testing. Deployments are moved away from the scheduled release pattern. George noted that, because they are a bank, regulations prevent their developers from initiating a production change. They use APIs with the product owners to automatically create tickets, and then product owners accept tickets, making the change in the production code. While this won’t apply to most environments, he brought it up to demonstrate how you can implement continuous delivery within these rules.
Within all of this is the importance of automation. George outlined their four basic principles of automation and the key aspects of each:
Principle #1 – Infrastructure as Code. They use AWS for hosting and everything is in a Cloud Formation Template, which is a way to describe your infrastructure using code. AWS now allows you to use CFTs to pass variables between stacks. Using code, every change can be tested first, and they can easily spin-up environments.
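To make “describing your infrastructure using code” concrete, here is a hedged sketch, not Capital One’s actual templates: a Python script that builds a minimal CloudFormation-style template as plain data, with a parameter and an exported output (the Export mechanism is what lets one stack pass values to another). Resource names and values are invented for illustration:

```python
import json

# A minimal CloudFormation-style template expressed as plain data.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        # Environments become inputs, not hand-edited config.
        "Environment": {"Type": "String", "AllowedValues": ["dev", "qa", "prod"]},
    },
    "Resources": {
        "AppBucket": {"Type": "AWS::S3::Bucket"},
    },
    "Outputs": {
        # Exporting an output is how values are passed between stacks.
        "AppBucketName": {
            "Value": {"Ref": "AppBucket"},
            "Export": {"Name": "app-bucket-name"},
        },
    },
}

rendered = json.dumps(template, indent=2)
```

Because the template is just data in source control, it can be diffed, reviewed, and tested like any other code before an environment is spun up from it.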
Principle #2 – Configuration as Code. This is also known as configuration management systems (they use Chef and Ansible). There are no central servers, changes are version controlled, and they use “innersourcing” for changes. For instance, if someone needs a change to a plugin, they can branch, update, and create a pull request.
Principle #3 – Immutability. Not allowing changes to servers once deployed prevents “special snowflakes” and regressions. Any changes are made in code and traverse a testing pipeline and code review before being deployed. This avoids what we have all experienced: the server that someone who is no longer around set up and tweaked differently from everything else, without documenting what was done.
Principle #4 – Backup and Restore Strategy. A backup is only as good as your restore strategy. You know the rest.
George also dives into how they do continuous delivery/continuous integration in his talk, which you can watch online here.
If you missed any of the other 30-minute long presentations from All Day DevOps, they are easy to find and available free-of-charge here. Finally, be sure to register you and the rest of your team for the 2017 All Day DevOps conference here. This year’s event will offer 96 practitioner-led sessions (no vendor pitches allowed). It’s all free, online on October 24th.


10 Tips to Writing Good Unit Tests

Since we already got started on unit testing in the previous post, I thought we could stick with the topic and lay out some rules for writing good, maintainable unit tests. The choice was pretty arbitrary and by no means complete, but I hope it will be helpful. Let’s get started.
1. Make Them Short
Since we’re testing a single piece of functionality, delivered by a single unit of code, it makes sense that a test should be reasonably short. How short? Well, that depends on multiple factors, but usually not longer than a few lines of code.
2. Don’t Repeat Yourself
Good coding practices apply to test code in the same way they apply to production code. In my experience, one of the most commonly violated rules in unit tests is DRY. Some people even claim that unit tests should not share any code at all. That’s pure BS. Of course, you want to keep your tests as readable as possible, but copy-pasting things around is not the solution.
3. Prefer Composition Over Inheritance
Once you acknowledge the two previous points, you might feel tempted to create some sort of base class for your test that will contain commonly used code. If you do, stop right there! Such a base class works like a magnet for all sorts of unrelated shared code and grows very quickly until it takes over your project, your company, and even your dog. Protect your dog, use composition!
4. Make Them Fast
Unit tests are something you should be able to run almost all the time. For this reason, make sure to mock out external dependencies and other things that might slow your tests down. This will usually be things like databases, external systems, or file operations. At the same time, don’t overdo this – complete isolation of the unit under test is not a good solution either.
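One common way to mock out a slow dependency is to hand the unit a stand-in object. A minimal Python sketch using the standard library’s `unittest.mock` (the service class and method names are invented for illustration):

```python
from unittest import mock

class UserService:
    def __init__(self, db):
        self.db = db  # the slow external dependency (a real database in production)

    def display_name(self, user_id):
        row = self.db.fetch_user(user_id)
        return row["name"].title()

# In the test, replace the real database with a stub that answers instantly.
fake_db = mock.Mock()
fake_db.fetch_user.return_value = {"name": "ada lovelace"}

service = UserService(fake_db)
name = service.display_name(7)
```

The unit’s own logic (title-casing the name) is still fully exercised, but the test runs in microseconds because no real database is involved; that is the balance the tip above is after.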
5. Make Them Deterministic
Every time I hear that somebody has a 95% working test suite and that it’s good enough to go to production, I want to both laugh and cry at the same time. For God’s (or your dog’s) sake, unit tests should be working 100% of the time. Only 100% passing tests mean that everything is okay (with the units, that is; you need other kinds of tests as well). If your unit tests seem flaky, make sure to find the root cause and fix it as soon as possible.
6. Don’t Ignore Tests
Given points 4. and 5., it’s particularly important to mention that adding the @Ignore annotation to your test is not a way to fix your test suite. It’s a way to make your test suite even more unreliable because now it’s not protecting you from regression bugs and such.
7. Test Your Tests
Yes, you read that right, and no, I’m not crazy. I don’t mean writing tests of your tests. I mean practices like mutation testing, test-driven development, or frequent “changing random stuff” in your codebase to see if any tests fail. I also often do a mental exercise of trying to come up with such potential changes to the code that my tests would not spot.
8. Name Your Tests Well
No, shouldThrowException is not a good name for your test. Although I’m not convinced that every project should use some fancy naming conventions for the tests, I am fully convinced that you should be able to tell which part of your code is broken by barely reading the names of failed test cases.
9. One Logical Assertion Per Test
To achieve the nice goal of being able to tell what’s wrong by just reading the names of failed tests, one requires something more than just good names. The number of things that a single test checks must be limited as well. Therefore, a good unit test should contain only one logical assertion, i.e. check only one output or side effect of the tested method.
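Tips 8 and 9 can be sketched together. The function under test and the test names below are made up for illustration, using Python’s standard `unittest`:

```python
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100.0, 2)

class ApplyDiscountTest(unittest.TestCase):
    # Each name says exactly what is broken when the test fails,
    # and each test makes a single logical assertion.
    def test_ten_percent_off_reduces_price_by_ten_percent(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_discount_above_one_hundred_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

# Run the suite explicitly so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

If `test_discount_above_one_hundred_percent_is_rejected` fails, you know which behavior regressed without opening the test body, which is the whole point of descriptive names.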
10. Design Your Tests
This is a meta-tip that spans all of the other tips in this article, and all of those that I did not mention here. Treat your tests with the same care that you treat your production code with. Consider both good design principles and indicators like low coupling between test code and production code, and watch for code smells such as duplication, dead code, and the like. Remember that a good test suite can make your life much easier by making you feel safe when changing and refactoring your code, while a bad test suite can make you miserable, waste a ton of your time, and make your code almost impossible to change.

Why and How to Shift From Manual to Automated Testing

What does a manual tester do when hitting an iron ceiling? Sure, they strive to shift to automation. If you browse through different blogs and forums, you’ll find plenty of tips and instructions. People recommend learning one or several testing tools for automation, reading books and articles, or following some dedicated sources. But is it enough to become a test automation engineer? Not at all.
Today, automation requires more than the ability to work with some testing tools. Coding skills are also obligatory. In 2010, 79% of job advertisements for software tester positions in the USA required programming skills. Seven years later, knowledge of coding languages for a test automation engineer is not just a benefit; it is a must.
Change the Role
Traditionally, the testing team is regarded as “skilled end-users” who evaluate software with the means available for ordinary people – smartphones/tablets/computers, an Internet connection, keyboard, fingerprints, etc. But now, the role of software testing has changed.
QA specialists should be aware of how the code works and, moreover, how to write it. It resembles reading a book: you need not be the author of the book, but you certainly know how it is written. The relationship between developer and tester should not resemble that of writer and literary critic. They should both be coauthors of one masterpiece.
To shift to automation, a manual tester should first of all focus on coding.
Select the Programming Language
According to an analysis of job postings, the top five programming languages of 2017 are SQL (Structured Query Language), Java, Python, JavaScript, and C++. In 2016, the top list was almost the same. But there are different rankings based on different criteria, and each such top list may differ. For example, Java is the top language according to the TIOBE index (17.3%), based on search engine queries, and the PYPL index (23.1%), based on Google Trends. Keep this in mind while surfing the Internet.
Besides that, some languages are not suitable for test automation – purely functional ones like Haskell or LISP. Of course, you can use them for some data testing, but while selecting the language, take into account the area of its usage. For example, JavaScript is suitable for pure web testing.
Explore the Market Trends
Nevertheless, a few lists will not help you to make the right choice of programming language. Some are in demand and others are not. To be on trend, you should either follow the latest trends or set new ones. It is better to start with the first option, as at first we do not have enough experience and skill to change the face of testing and automation.
What are the automation trends in 2017? The tester’s and developer’s roles are expected to merge because of DevOps (Development and Operations), Quality Engineering, and CI/CD (Continuous Integration/Continuous Delivery). Thus, we have to study programming languages.
Focus on mobile development and testing. According to Gartner, more than 300 billion downloads of mobile applications are expected this year. If you are fond of mobile testing, be ready to use Appium, EarlGrey, Selendroid, MonkeyRunner, etc, and do not forget about programming languages of mobile products – HTML5, Objective-C, Swift, and others.
Cisco says that over 50 billion devices will be interconnected by 2020. Thus, IoT (Internet of Things) will require a new testing strategy. Specialists will test not only software but firmware and hardware as well.
API testing is becoming more and more popular. It will reduce the time needed for automation. There are no language limitations here because data transfer is in XML (Extensible Markup Language) and JSON (JavaScript Object Notation).
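Because API responses are plain JSON, an API-level check can indeed be written in almost any language. A hedged Python sketch against a canned payload (the field names and endpoint are invented; in a real test the payload would come from an HTTP call):

```python
import json

# A canned response a user-profile endpoint might return.
raw = '{"id": 42, "email": "ada@example.com", "roles": ["tester", "admin"]}'

def check_profile_payload(payload):
    """Validate structure and types, the bread and butter of API tests."""
    data = json.loads(payload)
    assert isinstance(data["id"], int), "id must be numeric"
    assert "@" in data["email"], "email must look like an address"
    assert isinstance(data["roles"], list) and data["roles"], "roles must be non-empty"
    return data

profile = check_profile_payload(raw)
```

The same checks would work unchanged whether the service behind the endpoint is written in Java, C#, or anything else, which is exactly why API testing has no language limitations.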
Practice Coding
When the field and programming language are selected, it is time to proceed with continuous practice. Reading is not enough; you should put your theoretical knowledge into practice. You will be able to understand and write good code only when you try to write and refactor it on your own.
The best way to learn to swim is to jump into the water. But do not go too deep where you cannot stand. You can start from online courses that are free or not so expensive. Such training courses will show you the fundamentals of coding. It is a rather useful practice to start with.
Create a GitHub Account
GitHub helps you get access to the latest open-source toolsets and additional information. You can explore Appium, REST Assured, SeleniumHQ, etc. Besides, if you have your own projects and code fragments, you can upload them to GitHub to receive feedback. Moreover, a GitHub account can be added to your resume.
Do Not Neglect Networking
Visit conferences and meetups and network on LinkedIn to get more information about coding. Communication with experts engaged in programming and automated testing helps you understand the nuances of such a position. Do not be afraid to ask questions and share your own thoughts on different matters. Moreover, networking helps you meet potential employers.
While shifting to automation, always keep practicing. Do not miss any opportunity on your way to automation. Enjoy the process and never give up!

Tips for Scaling Mobile App Development

If you’re not able to meet the demand in your business for developing mobile apps, you’re not alone. The gap between companies needing mobile apps and the ability to deliver them is growing fast. In fact, a Gartner report, “The Key Fundamentals Required to Scale Mobile App Development,” warns that “By 2020, market demand for mobile app development services will be three times greater than internal IT organizations’ capacity to deliver them.”
A Gartner survey on enterprise mobile app development, conducted in 2016, found that companies, on average, have released only eight mobile apps, “with a significant number of respondents not having released any mobile apps.” The report says, “This is an indication of the nascent state of mobility in most organizations, with many organizations questioning how to start app development in terms of tools, vendors, architectures or platforms, let alone their ability to scale up to releasing 100 apps or more.”
But it doesn’t have to be that way. The Gartner report offers the following advice for scaling mobile app development efforts at your organization.
Prioritize App Development So Quality Is Not Sacrificed for Speed
The report notes that apps are typically built on a first-come-first-served basis. That means that the most important apps often take a back seat because they were late to the queue, even though they should get top priority. The report says, “This lack of value-driven prioritization leads to inefficient use of IT resources and degradation in the quality of apps delivered.”
Businesses should factor in two key pieces of information about each app to be developed: whether it will be easy or hard to build, and how high or low an impact the app will have on the organization. Based on that, enterprises can properly prioritize which apps should be built first.
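The ease/impact weighting might be sketched like this. The scoring scheme (impact per unit of effort) and the app names are my own illustration, not Gartner’s methodology:

```python
# Candidate apps scored on impact (higher is better) and effort (higher is costlier).
backlog = [
    {"app": "field-sales",   "impact": 9, "effort": 3},
    {"app": "expense-entry", "impact": 4, "effort": 2},
    {"app": "hr-onboarding", "impact": 7, "effort": 8},
]

def priority(item):
    # Value-driven score: how much impact each unit of effort buys.
    return item["impact"] / item["effort"]

# Highest-value apps first, instead of first-come-first-served.
build_order = sorted(backlog, key=priority, reverse=True)
```

Even a crude score like this beats a first-come-first-served queue, because the high-impact, low-effort apps surface at the top of the backlog automatically.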
Create an Agile API Layer to Optimize Mobile Integration
Integration is often the most time-consuming and difficult part of building an app. This leads to delays in app development, so Gartner recommends building an API layer to make integrations easier and faster. It recommends several different methods: using enterprise mobile back-end service tools, using API management tools for large-scale deployments, and using a rapid mobile app development (RMAD) tool.
Use RMAD Tools to Deliver More Apps
RMAD tools use a low-code or coding-optional approach, allowing someone with little experience to quickly build a mobile app, yet also let experienced developers add more advanced code if needed. They’re end-to-end tools for building mobile apps, including backend integration and front-end app development. Using them, the report says, can dramatically speed up mobile app development.
Use a Product Management Model for Mobile Agile Development
Companies need to treat mobile apps as products, not just apps, Gartner says. It warns, though, that doing that means “more than just applying agile development methodologies. It should include best practices for product management in general.”
That means making sure product managers in charge of apps are given the proper authority, and ensuring that collaboration among different groups is constant and ongoing.
For more information about the Gartner report, click here.

Learn Web API Using WPF, WebForms, and Xamarin

What Is the ASP.NET Web API?
ASP.NET Web API is a framework for building web APIs on top of the .NET Framework. It makes it easy to build HTTP services that reach a broad range of clients, including mobile devices, web browsers, and desktop applications.
Web API is similar to ASP.NET MVC, so it contains all of MVC’s features.
• Model
• Controller
• Routing
• Model binding
• Filter
• Dependency injections
HTTP is not just for serving web pages. It is also a powerful platform to build RESTful (Representational state transfer) APIs that expose services and data. HTTP is simple, flexible, and ubiquitous.
Why Use the ASP.NET Web API?
Web API can be used by a wide range of clients: Windows and web applications, mobile devices, and browsers. Web API fully supports the HTTP verbs (GET, POST, PUT, DELETE).

Create a Simple Web API to Send an Email
Open Visual Studio 2015. Go to New Project -> Visual C# -> Web, and choose the ASP.NET Web Application project type in the pop-up.

From the pop-up, given below, we will select the Web API template.

Once the project is created, add a new API controller to the Controllers folder: right-click on Controllers -> Add -> Controller, then choose the API controller scaffold. If you’ve done it properly, “SendMailApiController” will appear.

Using the Namespace
If a namespace is missing from your project, add the package via the NuGet Package Manager. Then write the code in your method.
Write Web.Config File
I am using the Gmail domain and configuring the From mail ID in the Web.config file.

Once you run the application, Web API REST Services are ready.

Calling the Web API Method
In this section, we’ll call the Web API method from the following:
• WPF (Native Application)
• WebForm (Web Application)
• Xamarin (Mobile Application)

Consume Web API in WPF (Native Application)
To create a WPF Application, go to New Project, select Windows and choose WPF Application.

Simply design the windows, as shown below, using XAML code, and give parameters to Email Id, Email Subject, and Email Body.

Using the Namespace Given Above
Base code
After inserting the namespace, create a new instance of the HttpClient, followed by setting its base URL to http://localhost:51537/api
Then set the Accept header, which instructs the server to send its response in JSON format:
new MediaTypeWithQualityHeaderValue(“application/json”)
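The same pattern, a base address plus an Accept header asking for JSON, is not specific to .NET. As a hedged sketch, here is the equivalent request built with Python’s standard library; the port is taken from the article’s example, the `/SendMailApi` route is an assumption based on the controller name, and nothing is actually sent over the network:

```python
from urllib import request

BASE_URL = "http://localhost:51537/api"  # port from the article's example

# Build (but do not send) the request: the same idea as HttpClient's
# BaseAddress plus an Accept header requesting a JSON response.
req = request.Request(
    BASE_URL + "/SendMailApi",   # hypothetical route for SendMailApiController
    headers={"Accept": "application/json"},
    method="POST",
)
```

Whatever the client language, the REST service only sees the URL, verb, and headers, which is why the same Web API can serve WPF, Web Forms, and Xamarin alike.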
The URL will be set in the App.config file:
Follow the code given above, and copy and paste the button click event.
Enter any input in the WPF Window and click the Send Mail button.

Consume Web API in WebForms (Web Application)
To create a web application, go to New Project ->Visual C#->Select ASP.NET Web Application. In this pop-up, choose the Web Forms template.

Once the project is created, you can create new Web Forms and design the forms, as shown below.

The same WPF code style will be used by the Web Application. If there is no HttpClient, then get it from the NuGet Package Manager.

After installing the package, you can adapt the WPF code for Web Forms, like so:
Now, run the web application, and the email will be sent from the mail ID that is configured in the Web API.

Consume Web API in Xamarin (Mobile Application)
To create a mobile application, go to New Project ->Visual C#->Cross-Platform, and choose Blank App, using the Portable class library in the pop-up.

Right click on App (Portable) and add new Item->Forms Xaml Page.

Navigate to the main page in the App.xaml.cs file
Just write the XAML code for an Android mobile UI, as shown below.
Run the mobile application. The page appears on a mobile device, or the app will open in the Android Emulator Manager.

Follow the same HttpClient settings and alter the code based on mobile UI in the button clicked event.
After clicking the button, the mail is sent to the specified Email Id.
In this article, we have learned how to use Web API with WPF, WebForms, and Xamarin. If you have any queries, please let me know in the comments section.