Monday, September 25, 2023

Bulk add members to MS Teams

MS Teams does not natively allow you to bulk-add members to new or existing teams. Adding members one by one is pretty troublesome if you need to add 100+ members in one go. PowerShell comes to your help in such a scenario.

First install the Microsoft Teams module in Powershell:

Install-Module -Name MicrosoftTeams

In case it's already installed but is an older version, update it using:

Install-Module -Force -Name MicrosoftTeams
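
Alternatively, if the module was originally installed from the PowerShell Gallery, the dedicated update cmdlet should work too:

Update-Module -Name MicrosoftTeams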

Once done, create a simple CSV file with 2 columns, Email and Role, and fill it up.
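
A minimal example (the addresses are hypothetical):

Email,Role
alice@contoso.com,Member
bob@contoso.com,Owner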

Next we need the group ID of the team to which you want to add the members. Yes, there is a PowerShell command for that as well, but it was pretty slow, especially when you are a member of hundreds of teams!!
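
For reference, that slower PowerShell route looks something like this (the display name is a placeholder):

$TeamID = (Get-Team -DisplayName "<team_name>").GroupId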

The simpler way is to pull it from the browser. Just open Teams in a browser window, select the target team, and look for the groupId parameter in the URL:



Once done, just execute the script below (note the sign-in step at the top):

#Sign in first - Add-TeamUser needs an authenticated session
Connect-MicrosoftTeams

#Get users from the CSV
$TeamUsers = Import-Csv -Path "<file_path>"
$TeamID = "<Team_Id / groupId>"

#Iterate through each user from the CSV and add to the team
$TeamUsers | ForEach-Object {
    Add-TeamUser -GroupId $TeamID -User $_.Email -Role $_.Role
    Write-Host "Added User:"$_.Email -f Green
}

You should see members getting added one by one:
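
With the sample CSV above, that would look something like:

Added User: alice@contoso.com
Added User: bob@contoso.com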




Sunday, July 23, 2023

Save space on Google Photos by compressing images

For the last few months, I was getting the Google warning that my 15GB of free space was running out and I needed to buy more storage. Upon analysing my space consumption on Google One, I realized the bulk of my space was being used by Photos. Last I remember it was all free, so I had set my Android phone photos to be automatically backed up to the Google cloud.

Then I saw that the policy was updated on June 1, 2021, and it now counts not only the original high-quality images against storage but also the compressed Storage saver images. The good thing was that any photo uploaded before June 1, 2021 would not be counted against the quota.

So the first thing I did was stop the automated backup, and then I looked for a day of free time to work out a technical solution. Like, I believe, most of us, I don't print any poster-size photographs; the last print I took was maybe a decade back. Most of my photos are for digital consumption, i.e. on a phone, tablet, laptop and at best a TV. The resolution required is at best 4K for the TV, but certainly not the 12 MP per photo which my phone camera was capturing. The solution was to somehow use JPG compression to reduce the photo size, but that is not available on the cloud.

So the process I used was:

  1. Downloading photos to a PC is cumbersome, so I relied on the Google Photos Android app. Select, say, a day's or a month's worth of photos -> select Share -> select Copy To. This forces the photos to be downloaded (in original quality, on selecting that option) to your phone's local storage. Let it download and then save to a preferred folder.
  2. Next, connect the PC to the phone using FTP and download the folder to the PC.
  3. Next, use IrfanView's batch processing to save the images as JPG at 90% quality (even 80% is fine). Remember to retain the EXIF, IPTC, etc. data. (If you would rather script this step, see the sketch after this list.)

  4. Once the batch is completed, move the output images to the original folder so that the original photos get overwritten. If you wish to back up the originals locally, this is the time.
  5. Now the issue is that the created and modified dates are updated to the current date and time. If you upload these as-is, it will mess up the Google Photos layout. So we need to update the created and last-modified timestamps.
  6. PowerShell helps us achieve it:

    # Parse the capture timestamp out of file names like IMG_20230508_110348.jpg
    # and stamp it back on to the file-system dates
    $modifyfiles = Get-ChildItem -Force *.jpg | Where-Object { ! $_.PSIsContainer }
    foreach ($object in $modifyfiles)
    {
        $timestamp = [datetime]::ParseExact($object.BaseName.Substring(4), "yyyyMMdd_HHmmss", $null)
        $object.CreationTime = $timestamp
        $object.LastWriteTime = $timestamp
    }


  7. Once done, FTP the images back to the phone and open Google Photos.
  8. In Photos, you will now see the cloud copies tagged with a cloud symbol, whereas the local ones won't have the symbol.
  9. Delete the ones on the cloud and let it complete.
  10. Then back up the rest to the cloud.
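
As mentioned in step 3, if you would rather script the compression than use IrfanView's GUI, here is a rough PowerShell sketch using .NET's System.Drawing JPEG encoder. The folder paths and the quality value are assumptions; GDI+ normally carries the EXIF property items across the re-save, but do verify that on a sample first.

# Sketch: re-encode every JPG in $src into $dst at ~90% JPEG quality
Add-Type -AssemblyName System.Drawing

$src = "C:\photos\original"      # assumed input folder
$dst = "C:\photos\compressed"    # assumed output folder
New-Item -ItemType Directory -Force -Path $dst | Out-Null

# Pick the JPEG encoder and set its quality parameter
$codec = [System.Drawing.Imaging.ImageCodecInfo]::GetImageEncoders() |
         Where-Object { $_.MimeType -eq "image/jpeg" }
$params = New-Object System.Drawing.Imaging.EncoderParameters(1)
$params.Param[0] = New-Object System.Drawing.Imaging.EncoderParameter([System.Drawing.Imaging.Encoder]::Quality, [long]90)

Get-ChildItem -Path $src -Filter *.jpg | ForEach-Object {
    $img = [System.Drawing.Image]::FromFile($_.FullName)
    $img.Save((Join-Path $dst $_.Name), $codec, $params)
    $img.Dispose()
}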

And it's all done. Save a few GBs doing it :)

Restoring Images Exif Data after upload to Google Photos

Recently my free Google storage of 15GB was getting full, so I wished to get the photos back to my local system. Yes, we can quickly download them, but I found that Google had wiped off the EXIF data, especially the dates.


Luckily, the downloads still retained file names like IMG_20230508_110348.jpg, which encode the capture timestamp, so the data could be restored from there.

So I quickly created the batch script below and it worked :)


SETLOCAL ENABLEDELAYEDEXPANSION

for %%c in (*.jpg) do (

REM e.g. IMG_20230508_110348.jpg -> base name IMG_20230508_110348
set a=%%~nc

REM grab the 15 characters after "IMG_" : 20230508_110348
set x=!a:~4,15!

REM rebuild it in EXIF format : 2023:05:08 11:03:48
set y=!x:~0,4!:!x:~4,2!:!x:~6,2! !x:~9,2!:!x:~11,2!:!x:~13,2!

echo !y!

REM write it back into the Exif DateTime tag (-k keeps the file timestamps)
"<exiv2_location>\exiv2" -k -M"set Exif.Image.DateTime Ascii !y!" %%c

)
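
To spot-check a file afterwards, something like this should print the rewritten tag back (assuming your exiv2 build supports the -g grep option on the print action):

"<exiv2_location>\exiv2" -g Exif.Image.DateTime pr IMG_20230508_110348.jpg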


We also need a PowerShell script to update the created date and modified date:


# Same trick as before: parse the timestamp out of the file name
# and stamp it on to the file-system dates
$modifyfiles = Get-ChildItem -Force *.jpg | Where-Object { ! $_.PSIsContainer }
foreach ($object in $modifyfiles)
{
    $timestamp = [datetime]::ParseExact($object.BaseName.Substring(4), "yyyyMMdd_HHmmss", $null)
    $object.CreationTime = $timestamp
    $object.LastWriteTime = $timestamp
}

Thursday, July 8, 2021

AWS : NodeJS : Hello World on EC2

Although I have had an AWS account for over a decade, I really did not do much with it other than launching an EC2 instance and printing Hello World.

A decade has passed, and now AWS has caught up with me professionally, so I need to build some competencies around it.

First is to get the Hello World back again, using Node on a web page.

Assumptions:

  1. You have an AWS account. I am using the free tier.
  2. You have a basic understanding of Node.
  3. You have a basic understanding of AWS, especially EC2, VPC and ACLs.

Steps:
  1. Go to EC2. Select "Amazon Linux 2 AMI (HVM), SSD Volume Type" of type "t2.micro" and launch it. 

  2. You will get a private key .pem file. Keep it safe.

  3. On Windows, you will need to set the permissions of the file correctly to mimic Linux's chmod 400:
    1. icacls.exe key.pem /reset
    2. icacls.exe key.pem /grant:r "$($env:username):(r)"
    3. icacls.exe key.pem /inheritance:r

  4. Go to the Security Group and edit the inbound rules to allow:
    1. SSH (port 22) from anywhere
    2. Custom TCP (port 3000) from anywhere

      This will allow you to connect to the EC2 instance from your local laptop as well as browse the Hello World Node site running there.

  5. Coming back to the EC2 dashboard, you should see 1 instance running.

  6. You should be able to connect to it using an SSH client (see the sample command after this list) and see the welcome logo:
           __|  __|_  )
           _|  (     /   Amazon Linux 2 AMI
          ___|\___|___|
    https://aws.amazon.com/amazon-linux-2/

  7. Now that you have shell access to your EC2 instance, let's install Node.js.

  8. We will use nvm for it.

  9. curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash

  10. Activate nvm by simply starting a new bash session, or by running . ~/.nvm/nvm.sh

  11. nvm install node

  12. check node installed:
    node -v
    v16.4.2

  13. Now let's just build a simple Express web app:
    mkdir myapp
    cd myapp
    npm init (and just press Enter till the end)
    npm install express --save
    vi app.js
    const express = require('express')
    const app = express()
    const port = 3000

    app.get('/', (req, res) => {
      res.send('Hello World!')
    })

    app.listen(port, () => {
      console.log(`Example app listening at http://localhost:${port}`)
    })
  14. node app.js
    Example app listening at http://localhost:3000
  15. So our web app is up and running on port 3000, but it is still reachable only from within the EC2 instance. To be able to access it from the internet, we need the security group of the EC2 instance to expose port 3000. This was done in step 4.

  16. So access the web app using the public IP or hostname of the EC2 instance. Note to use http:// and not https://, and append port 3000 at the end. It should show up as:

    Hello World!
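
As referenced in step 6, the SSH connection looks something like this (the key file name and host are placeholders; ec2-user is the default user on Amazon Linux 2):

ssh -i key.pem ec2-user@<ec2-public-dns>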
That's all for today. Next we will connect the mighty database :)

Wednesday, April 14, 2021

Kubernetes : Getting started on Windows

I had set up Kubernetes on my local laptop and tried a few things 2-3 years back. In the meantime the laptop got replaced, and recently some work came up around it. So it was time to set it up again.

I found that Docker Desktop now supports Kubernetes, so I thought it would be pretty easy to get back up again. So I went to Docker Desktop -> Settings -> Kubernetes and selected Enable Kubernetes. In a few minutes it was enabled, and at the command prompt I got kubectl. Wow, it's all done.

Now the problems started.

First, when I executed kubectl version, I did get the client version but for the server version I got:

kubectl unable to connect to server: x509: certificate signed by unknown authority

So I spent quite a few hours trying to work this out. There is enough documentation on the web, but nothing specific to Docker Desktop. Despite the trial and error it did not work, so I abandoned it and went back to my earlier trusty minikube.

First I had to remove Kubernetes from Docker Desktop, but it was not allowing me to do it!! So I had to uninstall the whole of Docker Desktop.

Then installed the following using Chocolatey:

  1. Install kubectl : choco install kubernetes-cli
  2. Install Minikube : choco install minikube
Then installed Docker Desktop again, as I had uninstalled it earlier.

All done, let's check all things are fine:

  1. Start the cluster : minikube start
  2. Check kubectl : kubectl version

    Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"windows/amd64"}
    Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

  3. Check cluster : kubectl cluster-info

    Kubernetes control plane is running at https://127.0.0.1:61152
    KubeDNS is running at https://127.0.0.1:61152/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

  4. All good, lets start the dashboard : minikube dashboard

    * Enabling dashboard ...
      - Using image kubernetesui/dashboard:v2.1.0
      - Using image kubernetesui/metrics-scraper:v1.0.4
    * Verifying dashboard health ...
    * Launching proxy ...
    * Verifying proxy health ...
    * Opening http://127.0.0.1:60036/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...

    Open the browser and view the brand new Kubernetes cluster in all its glory :)

So we have our K8s cluster set up. Let's start a sample application just to test it out and revise some of the old stuff. I tried to set up nginx, as I will be working on an application which will have an HTTP interface.

So first create the nginx.yml file for the deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx

and apply it on the K8s cluster:

> kubectl apply -f nginx.yml
deployment.apps/nginx created

Go to the Minikube dashboard, and under Deployments you will find:


 Under Pods, you will find:


The equivalent command lines are:
  • kubectl get deployments
  • kubectl get pods
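
Typical output looks like this (the pod-name suffix is a generated hash, made up here; the ages will differ):

NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           1m

NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-abcde   1/1     Running   0          1m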

So all good, we have deployed nginx, which is running in a container within a pod. Still, we need to expose nginx on a port in the container so that it is accessible. Update the yaml file to:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        ports:
        - containerPort: 80
          name: nginx

and apply it : kubectl apply -f nginx.yml

So now we have set port 80, where the container will receive requests and forward them to nginx. Still, this port is exposed only within the K8s cluster and not to the outer world. To expose it to the outer world, or to the host, we need to create a service which will do so.

So create the nginx_service.yml as:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      targetPort: 80
      port: 8082

and apply it :
>kubectl apply -f nginx_service.yml
service/nginx created

In the minikube dashboard, you will find:

As you can see, it has exposed port 8082 for the app: nginx

Now you can check it in your local browser: http://localhost:8082 or http://127.0.0.1:8082/
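
If localhost does not respond directly (this depends on the minikube driver; a service of the default ClusterIP type is only reachable inside the cluster), a port-forward should bridge it:

kubectl port-forward service/nginx 8082:8082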

All good. Now we will get on with app development and deployment on the K8s cluster :)

Saturday, June 6, 2020

How to Set Up a WRT54G as a Repeater

I have been using Linksys WRT54G for over 15 years now and it has been an excellent router.

Over time, broadband speeds have increased, and my current broadband plan supports 200 Mbps. My router has an official peak speed of 54 Mbps, but practically, even if I sit next to it, I get around 20 Mbps, which falls to less than 10 Mbps in the next room, and by the time I reach my 3rd bedroom the signal is almost gone. So I felt I was losing out on the speed I was paying for.

Another issue was the increase in the number of devices connecting to the router. Earlier it used to be just my laptop. Now we have 3 laptops, 2 smartphones and 1 smart TV, and the count will only increase in future. Here my router was unable to cope. So if my child was watching YouTube on the TV in Full HD (in lockdown due to Corona), my Skype and Teams calls got impacted, although I have enough bandwidth to support all of them in parallel.

So it was time for an upgrade. I did my research and finally settled on the TP-Link Archer C60 AC1350, primarily because:

  1. I wanted a dual-band router, as it can support up to 867 Mbps @ 5 GHz. I don't think I will cross that speed in the next 3-4 years.
  2. It supports MU-MIMO, which helps serve multiple devices in parallel.
  3. It has 5 antennas.
  4. The cost was Rs 2200 on Amazon, about 300 more than the next model down, but with more features.
With the new router coming in, I did not want to throw away my old router, as the new one still did not reach my 3rd bedroom well. So I started looking up on Google how to convert my Linksys WRT54G into a repeater. And yes, it is possible.

The two reference links are:
Although the links are pretty accurate, there are some misses in them, which I believe is because the articles were written a while back and things have changed.

To flash the router and install DD-WRT on it:

  1. Do the hard reset exactly as described, i.e. 30-30-30. Actually I did more like 60-60-60 to be safe, and all went fine.
  2. The GV5Flash.zip download is missing the vximgtoolgui executable, although the folder name is the same. The executable inside the folder is wrt_vx_imgtool, which is the command-line version of the tool. To get the GUI, use this URL: http://web.archive.org/web/20060708130642/http://www.bitsum.com/files/vximgtoolgui.zip
Configure the repeater

  1. Here, skip the 'Reset Router' and 'Update Firmware' sections.
  2. Start from 'Set Static IP Address'.
  3. If you favour the command line, you can use:
    netsh interface ip set address name="Ethernet" static 192.168.1.9 255.255.255.0 192.168.1.1
    netsh interface ip set address name="Ethernet" dhcp
  4. Change Router Settings: in Step 7, you can put a better SSID than 'bridge', like _rpt or _ext.
  5. An important step missed there is securing the repeater itself. To do so, go to Wireless :: Wireless Security tab :: Virtual Interfaces section. There you can set the security mode for the repeater, like WPA2 Personal, and the key.

Now, a bigger issue: once you have set up the repeater, there is no way you can access it over WiFi, as it is a plain proxy. If you try http://192.168.0.1/ you will end up at the primary router, and there is no http://192.168.1.1/. So to access it again:
  1. You will need to set up your laptop with a static IP (the same command as above):
    netsh interface ip set address name="Ethernet" static 192.168.1.9 255.255.255.0 192.168.1.1
  2. Connect your laptop to the repeater using an Ethernet cable.
  3. Now try http://192.168.1.2/, as the repeater's own address is 1.2, with 1.1 being the gateway (as configured in Step 3 of 'Change Router Settings').

So all good, I ended up with 3 SSIDs:
  1. Primary 2.4 GHz
  2. Primary 5 GHz
  3. Repeater 2.4 GHz
and yes, my whole house is now covered :)

Monday, September 23, 2019

GIT Commit Stats using Command Line to analyse in Excel

git log --all --no-merges  --pretty=format:"%aE - %aI : %s" --after="2019-04-01 00:00"  > git.log
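
Here %aE is the author e-mail, %aI the author date in strict ISO 8601 format, and %s the commit subject; merge commits are excluded and only commits after April 1, 2019 are kept. The resulting git.log imports into Excel using " - " and " : " as delimiters. If you prefer a comma-separated file (beware of commas inside subject lines), a variant would be:

git log --all --no-merges --pretty=format:"%aE,%aI,%s" --after="2019-04-01 00:00" > git.csv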