
Visual Studio Code Remote Development Setup

Instructions for setting up VS Code to remote to a Linux server.

In my case, it's Amazon Linux 2 running .NET Core. On the Linux server, at the command line, I created a project using:

dotnet new web --name Employee

Then, in VS Code (running in Azure, in my case), connect to the Amazon Linux instance as follows:

  1. Install the Remote - SSH extension (included in the Remote Development extension pack).
  2. Verify that you can ssh to the remote server using PowerShell or Git Bash (or another tool of your choice).
  3. In the leftmost pane of Visual Studio Code, click on the Remote Explorer icon.
  4. In the SSH Targets explorer, click the Configure icon. (It looks like a gear.)
  5. In the middle of Visual Studio Code, you'll see a prompt listing several possible SSH configuration files. Choose whichever you prefer.
  6. Fill in the values for your setup, similar to the example shown after this list.
  7. In VS Code now, near the bottom left, you should see "SSH: Hostname", where Hostname is the name of the Host in your config. Navigate to the desired folder to open your project.
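
For reference, here is a minimal sketch of an entry in the SSH configuration file chosen in step 5. The Host alias, IP address, and key path are placeholders for your own values; ec2-user is the default user on Amazon Linux.

Host amazon-linux-dev
    HostName 203.0.113.25
    User ec2-user
    IdentityFile ~/.ssh/my-keypair.pem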

Spinning Up .NET Core on CentOS 8

Simple steps to get going.

  1. Create the CentOS 8 Instance via the Console, CloudFormation or the CLI.
  2. Log in to the instance using SSH or EC2 Instance Connect (browser).
  3. Run the following:
sudo dnf upgrade -y
sudo dnf install git -y
sudo rpm -Uvh https://packages.microsoft.com/config/centos/8/packages-microsoft-prod.rpm 
sudo dnf install dotnet-sdk-3.1 -y

# optionally test the installation
cd ~
mkdir -p projects/hello && cd projects/hello
dotnet new console
dotnet run

# optionally remove the directory
cd ~
rm -rf projects
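
Optionally, you can also confirm the installation directly with the dotnet CLI, for example:

# show installed SDKs and general environment info
dotnet --list-sdks
dotnet --info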

Spinning Up .NET Core on Amazon Linux

Simple steps to get going.

  1. Create the Amazon Linux Instance via the Console, CloudFormation or the CLI.
  2. Log in to the instance using SSH or EC2 Instance Connect (browser).
  3. Run the following:
sudo yum upgrade -y
sudo yum install git -y
sudo rpm -Uvh https://packages.microsoft.com/config/centos/7/packages-microsoft-prod.rpm 
sudo yum install dotnet-sdk-3.1 -y

# optionally test the installation
cd ~
mkdir -p projects/hello && cd projects/hello
dotnet new console
dotnet run

# optionally remove the directory
cd ~
rm -rf projects

Photo by Luca Campioni on Unsplash


A Note on Connecting GitHub Webhooks with Jenkins in AWS

Jenkins is a potential CI/CD solution for my company. I'll admit that I have it in for Azure DevOps: no one really understands it well, and the person who set it up has left the company. Jenkins is likely the leader in this space.

As part of this effort, I wanted to explore GitHub Webhooks as automatic triggers for builds in Jenkins.

There are many excellent resources online to help with setting up that integration. Here is one. Many resources appropriately assume connectivity between a Git repo and Jenkins. That connectivity is not necessarily a given, so I thought I'd share some issues I ran into connecting my personal public GitHub repo to a Jenkins box on AWS.

Using a public repo does simplify the GitHub side of the equation. On the AWS side, in my case, the Jenkins server has an Elastic IP that is protected by a Security Group. Only traffic originating from specific IPs is allowed, so one needs a way to accept only those packets that belong! The rub here is that we want to allow the traffic coming from GitHub, which was denied by design.

GitHub actually makes this fairly easy by publishing its IP ranges here. It simply became a matter of setting up (in my case) a dedicated Security Group that allows those IPs. GitHub also makes it fairly easy in the UI to test delivery and redelivery of webhooks on the repository's Webhooks page.
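
As a rough sketch of how that Security Group could be populated from the command line: the following assumes the AWS CLI and jq are available, that Jenkins listens on port 8080, and that sg-0123456789abcdef0 is a placeholder for your Security Group ID.

# allow GitHub's published webhook source ranges into the Jenkins port
# (IPv6 ranges are skipped here for brevity)
for cidr in $(curl -s https://api.github.com/meta | jq -r '.hooks[]' | grep -v ':'); do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8080 \
    --cidr "$cidr"
done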

As you can see from the image above, one simply needs to click the redelivery button to understand whether connectivity was achieved. In my case, I kept tightening the restrictions in the Security Group and re-testing delivery, confirming the connection each time.

Sidenote – Binding GitHub and Jenkins

One interesting configuration item of note is that GitHub Webhooks point to the Jenkins server's base URL followed by /github-webhook/. They do not point directly to the URL of the Jenkins Project that will do the build. Note the Payload URL below.
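
For example, a Payload URL might look like the following, where the hostname and port are placeholders for your Jenkins server's address:

http://jenkins.example.com:8080/github-webhook/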

In Jenkins, you do stipulate the repo to which the Project will be bound, as shown below. There is additional configuration needed in Jenkins, but that's beyond the scope of this article and may be found here.

Photo by Edson Rosas on Unsplash


Keybinding Gnome Screenshot

There’s a nice little program for taking screenshots available for Windows and Mac called Greenshot. It’s keybound on my Windows installation with Ctrl+Alt+P.

Greenshot isn't available on Linux. Wanting a consistent experience between machines, I found it fairly easy to bind Linux's gnome-screenshot the same way.

  1. Launch Settings. You can search for it by pressing the Super key.
  2. In Settings, go to Devices >> Keyboard.
  3. Scroll to the very bottom. Click the “+”.
  4. In the Set Custom Shortcut screen, set the Name, Command and Shortcut as shown below. Note that the -i and -a flags start the program interactively and default it to area selection, respectively.
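
As a concrete example (the Name is arbitrary; the shortcut mirrors the Greenshot binding mentioned above):

Name:     Screenshot (interactive, area)
Command:  gnome-screenshot -ia
Shortcut: Ctrl+Alt+P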

You can review the available flags using man gnome-screenshot.

Photo by Daniel Korpai on Unsplash


AWS EBS, KMS and EC2

A graphic for the Amazon documentation "How Amazon Elastic Block Store (Amazon EBS) Uses AWS KMS".

Diagram: EBS, EC2 and KMS


A Case for Automation

I sat on several calls today — on a day off, no less. Sigh.

We shouldn’t really even need to be making a case for automated deployments, but unfortunately I find myself (still) doing so regularly.

Today was to be spent studying for an AWS exam and doing some other productive things. Like most other responsible professionals, I don’t mind hopping on a quick call or two on a day off — in fact, I like it. It makes Monday (today is Friday) a little easier.

But today was not to be spent studying.

The first call (2 hours) was spent trying to understand why a web deployment to a production server had gone awry. Most articles on 502 errors begin by explaining that the error is hard to track down. No amount of checking the Application log seemed to help.

Dev and QA work without issue. Production appears to match, insofar as I'm allowed to see. These issues occur often enough that I left several days' wiggle room prior to the release. So, luckily, we have 5 days to figure things out.

On the second call, a developer on my team and I watched the SysAdmin deploy several SSIS packages. There are not only packages involved, but of course T-SQL powering those packages, shares that need to be set up, SQL Jobs, etc.

The Admin was doing a remarkable job with excellent attention to detail. He was also logging everything he was doing with screenshots and notes.

Still, on one occasion, I asked him to check one of the shares he had created: he had added permissions to the share, but not to the folder. A missed step.

I asked him to check several additional things. Each of those was done perfectly.

A few minutes later, I remarked to the developer that one of the new Integrations wasn’t appearing on our dashboard. The script to create the database entry was missing, so the developer created it on the fly and amended our instructions. The Integration still didn’t appear. The amended script had a copy-paste error in it and still hadn’t created the correct database entry for the new integration.

We follow a fairly typical deployment pattern.

  1. It works on the developer’s machine.
  2. It works in a development environment.
  3. It works in a QA environment (servers, databases, etc.) that the developer cannot access. It's deployed to QA by the same SysAdmin who deploys to Production. A tester tests in both DEV and QA. Automated tests, where appropriate, occur.
  4. It gets deployed to Production.

The second call, observing the Production deployment, took over an hour.

The problem is not the people, at least not these two people working on the deployment. They’re both remarkably competent professionals. They have great attention to detail. They were both calm, reserved and professional the entire time.

The problem is with the process, and with the inability or unwillingness to move away from it. What's worse is the stress that accompanies these deployments. One tiny mistake can cause an entire process to fail.

For us, this can be a failure to transmit financial information in a timely or correct fashion.

What's needed, of course, is the ability to perform hundreds or even thousands of test deployments against identical environments. Ideally: identical environments, automated deployments, automated test scripts, then tearing the whole thing down and going again, inserting planned failures along the way to observe successful rollback and notification.

*

Not entirely coincidentally, I had a slide deck open on another machine from a CloudFormation presentation given at a Chicago AWS group meeting that I was unable to attend.

Of course, CloudFormation and SSIS deployments aren’t really the same animal, but conceptually, the ability to automate infrastructure and the ability to automate application deployment aren’t terribly different.

*

I’ve been pushing the issue lately. Today’s deployment took a combined team time of 10 hours. Most pieces deployed fine in the end, but the single piece that didn’t means that what did get deployed can’t yet be used.

The team is hundreds of hours into the effort to modernize this process. More modern security, a better UI, regression testing over 800 files (all we had) and coordination for the rollout are all potentially delayed. That’s to say nothing of the time we all should have spent doing more productive things.

There is one thing holding us up from modernizing, and that is an inability or unwillingness to improve this process.

But not for much longer.

Photo by Andy Kelly on Unsplash


Create an EC2 Instance from a Snapshot

A use case for this is when you have a dedicated instance that you'd like to reuse for another purpose.

There is a quicker way to clone an instance when using on-demand instances: creating and using an AMI.
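
For what it's worth, that AMI route is essentially a one-liner with the AWS CLI; the instance ID and image name below are placeholders.

aws ec2 create-image --instance-id i-0123456789abcdef0 --name "clone-template"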

  • Prior to Setup
    • Determine the Availability Zone you want to use.
    • Note the Security Group of the existing EC2 instance.
  • Create a Snapshot from the instance you wish to clone.
    • On the EC2 Dashboard, navigate to Elastic Block Store >> Volumes.
    • Identify the Instance you wish to clone in the Attachment Information column.
    • Actions >> Create Snapshot
  • Create a Volume from the Snapshot.
    • On the EC2 Dashboard, navigate to Elastic Block Store >> Snapshots.
    • Identify the Snapshot from which you wish to create a Volume.
    • Actions >> Create Volume
    • IMPORTANT!: Choose the desired Availability Zone.
    • Note the Volume Id.
  • Create an EC2 Instance in the desired Availability Zone.
    • Choose the same Security Group as the modeled instance.
    • Follow the steps in the Wizard.
    • Once the Instance has started, stop the instance.
  • Force Detach the existing Volume from the Instance.
    • On the EC2 Dashboard, navigate to Elastic Block Store >> Volumes.
    • Actions >> Force Detach
    • Optional but recommended. Actions >> Delete
  • Attach the cloned Volume.
    • On the EC2 Dashboard, navigate to Elastic Block Store >> Volumes.
    • Actions >> Attach
    • Choose the desired instance and specify the Device.
  • Start the EC2 Instance.
    • On the EC2 Dashboard, navigate to Instances >> Instances.
    • Actions >> Instance State >> Start
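
For reference, the same flow can be sketched with the AWS CLI. The IDs, Availability Zone, and device name below are placeholders, and the device name in particular depends on your AMI/OS.

# snapshot the source volume
aws ec2 create-snapshot --volume-id <source-volume-id> --description "clone source"

# create a volume from the snapshot in the target Availability Zone
aws ec2 create-volume --snapshot-id <snapshot-id> --availability-zone us-east-1a

# stop the new instance, swap in the cloned volume, then start it
aws ec2 stop-instances --instance-ids <new-instance-id>
aws ec2 detach-volume --volume-id <new-instance-original-volume-id> --force
aws ec2 attach-volume --volume-id <cloned-volume-id> --instance-id <new-instance-id> --device /dev/xvda
aws ec2 start-instances --instance-ids <new-instance-id>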


Feature Photo by Robin Spielmann on Unsplash


Auto Scaling Metrics: A Brief Overview


While studying for the AWS Solutions Architect Professional exam, I wanted to memorize the Auto Scaling Metrics. It was suggested that knowing these metrics cold would be helpful on the exam.

During a review, it became obvious that the Auto Scaling metrics fall into three broad categories: Size, Capacity, and Instances. Each category can be further described by a Detail.

Metric                      Category    Detail
GroupMinSize                Size        Min
GroupMaxSize                Size        Max
GroupDesiredCapacity        Capacity    Desired
GroupInServiceInstances     Instances   InService
GroupPendingInstances       Instances   Pending
GroupStandbyInstances       Instances   Standby
GroupTerminatingInstances   Instances   Terminating
GroupTotalInstances         Instances   Total

For the Instances category, there are some interesting distinctions to be made among the set. InService excludes Pending and Terminating. Total includes InService, Pending, and Terminating. Standby includes instances that are running, but not InService.

Recall from Lifecycle Hooks the general states for an EC2 instance:

Pending –> InService –> Terminating.

If we order the metrics as GroupPendingInstances, GroupInServiceInstances and GroupTerminatingInstances, we can then easily add the remaining Total and Standby metrics for overall clarity.
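
To see these group metrics for yourself, one approach (assuming the AWS CLI is configured and my-asg is a placeholder group name) is to enable group metrics collection and then list what CloudWatch reports:

# enable collection of all group metrics for the Auto Scaling group
aws autoscaling enable-metrics-collection --auto-scaling-group-name my-asg --granularity "1Minute"

# list the group metrics now published to CloudWatch
aws cloudwatch list-metrics --namespace "AWS/AutoScaling" --dimensions Name=AutoScalingGroupName,Value=my-asg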

Photo by Chris Liverani on Unsplash