Playing with G Suite MDM mobile device data using Go

Our desktop support & G Suite admin folks needed a simple, fast command-line tool to query basic info about our company’s mobile devices (which are all managed using G Suite’s built-in MDM).

So I wrote one.

Since this tool is run from the command line, it can't use any interactive or browser-based authentication, so we need to use a service account instead.

Pre-requisites (GCP & G Suite):

  • Create a service account in your G Suite domain (Google Cloud Console –> IAM & Admin –> Service Accounts)
  • Be sure that you have checked the “Enable G Suite Domain-wide Delegation” box as this will give your new service account the appropriate permissions to query the G Suite Admin API
  • Download the service account JSON credentials file. Open it up and find the client_id. Make a note of it!
  • Now we need to authorize the client_id of your service account for the necessary API scopes. In the Admin Console for your G Suite domain (Admin Console –> Security –> Advanced Settings –> Authentication –> Manage API Client Access), add your client_id in the “Client Name” box, and add
    https://www.googleapis.com/auth/admin.directory.device.mobile.readonly
    in the “One or more API scopes” box
  • Click the “Authorize” button when you’re finished

Pre-requisites (Go)
You’ll need to “go get” a few packages:

  • go get -u golang.org/x/oauth2/google
  • go get -u google.golang.org/api/admin/directory/v1
  • go get -u github.com/dustin/go-humanize

Pre-requisites (Environment Variables)
Because it's never good to store runtime configuration within code, you'll notice that the code references several environment variables. Set them up to suit your preferences, but something like this will do:

export GSUITE_COMPANYID="A01234567"
export SVC_ACCOUNT_CREDS_JSON="/home/rickt/dev/adminsdk/golang/creds.json"
export GSUITE_ADMINUSER="[email protected]"

And finally, the code
Gist URL: https://gist.github.com/rickt/199ca2be87522496e83de77bd5cd7db2

package main

// https://rickt.org/2019/01/08/playing-with-g-suite-mdm-mobile-device-data-using-go/

import (
    "context"
    "errors"
    "flag"
    "fmt"
    "github.com/dustin/go-humanize"
    "golang.org/x/oauth2/google"
    "google.golang.org/api/admin/directory/v1"
    "io/ioutil"
    "log"
    "os"
    "strings"
    "time"
)

// runtime usage:
// Usage of ./mdmtool:
//   -all
//         List all MDM mobile devices
//   -imei string
//         IMEI of mobile device to search for
//   -name string
//         Name of mobile device owner to search for
//   -sn string
//         Serial number of mobile device to search for
//   -status string
//         Search for mobile devices with specific status

var (
    adminuser  = os.Getenv("GSUITE_ADMINUSER")
    companyid  = os.Getenv("GSUITE_COMPANYID")
    credsfile  = os.Getenv("SVC_ACCOUNT_CREDS_JSON")
    devices    *admin.MobileDevices
    row        int = 0
    scopes     = "https://www.googleapis.com/auth/admin.directory.device.mobile.readonly"
    searchtype = "all"  // default search type
    sortorder  = "name" // default sort order
)

// flags
var (
    all    *bool   = flag.Bool("all", false, "List all MDM mobile devices")
    imei   *string = flag.String("imei", "", "IMEI of mobile device to search for")
    name   *string = flag.String("name", "", "Name of mobile device owner to search for")
    sn     *string = flag.String("sn", "", "Serial number of mobile device to search for")
    status *string = flag.String("status", "", "Search for mobile devices with specific status")
)

// helper func to check errors
func checkError(err error) {
    if err != nil {
        log.Fatal(err)
    }
}

// check the command line arguments/flags
func checkFlags() (string, error) {
    // parse the flags
    flag.Parse()
    // show all devices?
    if *all == true {
        // -all shows ALL devices so doesn't work with any other option
        if *name != "" || *imei != "" || *sn != "" || *status != "" {
            return "", errors.New("Error: -all cannot be used with any other option")
        }
        return "all", nil
    }
    // name search
    if *name != "" {
        // don't use -name and any other search option
        if *imei != "" || *sn != "" || *status != "" {
            return "", errors.New("Error: cannot use -name and any other search options")
        }
        return "name", nil
    }
    // IMEI search
    if *imei != "" {
        // don't use -imei and any other search option
        if *name != "" || *sn != "" || *status != "" {
            return "", errors.New("Error: cannot use -imei and any other search options")
        }
        return "imei", nil
    }
    // Serial number search
    if *sn != "" {
        // don't use -sn and any other search option
        if *name != "" || *imei != "" || *status != "" {
            return "", errors.New("Error: cannot use -sn and any other search options")
        }
        return "sn", nil
    }
    // Status search
    if *status != "" {
        // don't use -status and any other search option
        if *name != "" || *imei != "" || *sn != "" {
            return "", errors.New("Error: cannot use -status and any other search options")
        }
        return "status", nil
    }
    // invalid search
    if *all == false && *name == "" && *imei == "" && *sn == "" && *status == "" {
        flag.PrintDefaults()
        return "", errors.New("Error: no search options specified")
    }
    return "", nil
}

// helper function to do a case-insensitive search
func ciContains(a, b string) bool {
    return strings.Contains(strings.ToUpper(a), strings.ToUpper(b))
}

func main() {
    // check the flags to determine type of search
    searchtype, err := checkFlags()
    checkError(err)
    // read in the service account's JSON credentials file
    creds, err := ioutil.ReadFile(credsfile)
    checkError(err)
    // create JWT config from the service account's JSON credentials file
    jwtcfg, err := google.JWTConfigFromJSON(creds, scopes)
    checkError(err)
    // specify which admin user the API calls should "run as"
    jwtcfg.Subject = adminuser
    // make the client using our JWT config
    gc, err := admin.New(jwtcfg.Client(context.Background()))
    checkError(err)
    // get the data
    devices, err = gc.Mobiledevices.List(companyid).OrderBy(sortorder).Do()
    checkError(err)
    // iterate through the slice of devices
    for _, device := range devices.Mobiledevices {
        // what type of search are we doing?
        switch searchtype {
        // show all mobile devices
        case "all":
            row++
            printDeviceData(device)
        // name search: iterate through the slice of names associated with the device
        case "name":
            for _, username := range device.Name {
                // look for the specific user
                if ciContains(username, *name) {
                    row++
                    printDeviceData(device)
                }
            }
        // IMEI search: look for a specific IMEI
        case "imei":
            // IMEI can be misreported via MDM with spaces, so remove them before comparing
            if strings.Replace(device.Imei, " ", "", -1) == strings.Replace(*imei, " ", "", -1) {
                row++
                printDeviceData(device)
                break
            }
        // serial number search: look for a specific serial number
        case "sn":
            // SN can be misreported via MDM with spaces, so remove them before comparing
            if strings.Replace(device.SerialNumber, " ", "", -1) == strings.Replace(*sn, " ", "", -1) {
                row++
                printDeviceData(device)
                break
            }
        // Status search
        case "status":
            if ciContains(device.Status, *status) {
                row++
                printDeviceData(device)
            }
        }
    }
    // if 0 rows returned, exit
    if row == 0 {
        log.Fatal("No mobile devices match specified search criteria")
    } else {
        // print the final/closing line of dashes
        printLine()
    }
    fmt.Printf("%d row(s) of mobile device data returned.\n", row)
}

// func to print out device data
func printDeviceData(device *admin.MobileDevice) {
    // print header only on first row of data
    if row == 1 {
        printHeader()
    }
    // convert last sync string to time.Time so we can humanize the last sync timestamp
    t, err := time.Parse(time.RFC3339, device.LastSync)
    checkError(err)
    fmt.Printf("%-16.16s | %-14.14s | %-16.16s | %-18.18s | %-13.13s | %-18.18s | %-20.20s\n", device.Model, device.Os, strings.Replace(device.SerialNumber, " ", "", -1), strings.Replace(device.Imei, " ", "", -1), device.Status, humanize.Time(t), device.Name[0])
    return
}

// func to print a separator line
func printLine() {
    fmt.Printf("-----------------+----------------+------------------+--------------------+---------------+--------------------+---------------------\n")
}

// func to print the header
func printHeader() {
    // print the first line of dashes
    printLine()
    // print header line
    fmt.Printf("Model            | OS & Version   | Serial #         | IMEI               | Status        | Last Sync          | Owner\n")
    // print a line of dashes under the header line
    printLine()
}

// EOF

Automated insert of CSV data into Bigquery via GCS bucket + Python

i wanted to try out the automatic loading of CSV data into Bigquery, specifically using a Cloud Function that would automatically run whenever a new CSV file was uploaded into a Google Cloud Storage bucket.

it worked like a champ. here’s what i did to PoC:

  • generate a CSV file with 1000 lines of dummy data via https://www.mockaroo.com/ with their default schema, so it looks like:
    $ head -3 testdata.csv
    id,first_name,last_name,email,gender,ip_address
    1,Andres,Berthot,[email protected],Male,104.204.241.0
    2,Iris,Zwicker,[email protected],Female,61.87.224.4
  • make a test Google Cloud Storage bucket:
     $ gsutil mb gs://csvtestbucket 
  • install the necessary python bits & pieces:
    $ pip3 install google-cloud-bigquery --upgrade
  • make a Bigquery dataset:
    $ bq mk --dataset rickts-dev-project:csvtestdataset
  • make a table within that dataset to match the CSV schema:
    $ bq mk -t csvtestdataset.csvtable \
      id:INTEGER,first_name:STRING,last_name:STRING,email:STRING,gender:STRING,ip_address:STRING
  • eyeball the table in the Bigquery dataset and verify it is clean and fresh
  • now it's time to write some python. here's mine:
    import os
    from google.cloud import bigquery

    def csv_loader(data, context):
        client = bigquery.Client()
        dataset_id = os.environ['DATASET']
        dataset_ref = client.dataset(dataset_id)
        job_config = bigquery.LoadJobConfig()
        job_config.schema = [
            bigquery.SchemaField('id', 'INTEGER'),
            bigquery.SchemaField('first_name', 'STRING'),
            bigquery.SchemaField('last_name', 'STRING'),
            bigquery.SchemaField('email', 'STRING'),
            bigquery.SchemaField('gender', 'STRING'),
            bigquery.SchemaField('ip_address', 'STRING')
        ]
        job_config.skip_leading_rows = 1
        job_config.source_format = bigquery.SourceFormat.CSV
        # get the URI for uploaded CSV in GCS from 'data'
        uri = 'gs://' + os.environ['BUCKET'] + '/' + data['name']
        # lets do this
        load_job = client.load_table_from_uri(
            uri,
            dataset_ref.table(os.environ['TABLE']),
            job_config=job_config)
        print('Starting job {}'.format(load_job.job_id))
        print('Function=csv_loader, Version=' + os.environ['VERSION'])
        print('File: {}'.format(data['name']))
        load_job.result()  # wait for table load to complete.
        print('Job finished.')
        destination_table = client.get_table(dataset_ref.table(os.environ['TABLE']))
        print('Loaded {} rows.'.format(destination_table.num_rows))
  • because you don’t want to hardcode things like bucket/table/dataset names in code, create a yaml file that will store your deployment-specific configuration in environment variables
  • BUCKET: csvtestbucket
    DATASET: csvtestdataset
    TABLE: csvtable
    VERSION: v14
  • create a requirements.txt file for the necessary imports
  • google-cloud
    google-cloud-bigquery
  • create a .gcloudignore file so that your yaml or CSV files will not be deployed into GCP
  • *csv
    *yaml
  • at this point, your folder should look something like this:
  • $ ls
    env.yaml  main.py  requirements.txt  testdata.csv
  • now we are ready to deploy the cloud function. we will add a trigger on the storage bucket that will fire every time a new file is added to the bucket. here we are creating a cloud function named “csv_loader”:
  • $ gcloud beta functions deploy csv_loader \
      --runtime=python37 \
      --trigger-resource=gs://csvtestbucket \
      --trigger-event=google.storage.object.finalize \
      --entry-point=csv_loader \
      --env-vars-file=env.yaml
  • ok, the function is deployed, yay! copy your test data CSV into the test bucket:
  • $ gsutil cp testdata.csv gs://csvtestbucket/
    Copying file://testdata.csv [Content-Type=text/csv]...
    - [1 files][ 60.4 KiB/ 60.4 KiB]
    Operation completed over 1 objects/60.4 KiB.
  • now that we have copied a CSV file into the bucket, the function should fire! check the cloud function logs:
  • $ gcloud functions logs read
    [ ... snipped for brevity ... ]
    D      csv_loader  274732139359754  2018-10-22 20:48:27.852  Function execution started
    I      csv_loader  274732139359754  2018-10-22 20:48:28.492  Starting job 9ca2f39c-539f-454d-aa8e-3299bc9f7287
    I      csv_loader  274732139359754  2018-10-22 20:48:28.492  Function=csv_loader, Version=v14
    I      csv_loader  274732139359754  2018-10-22 20:48:28.492  File: testdata2.csv
    I      csv_loader  274732139359754  2018-10-22 20:48:31.022  Job finished.
    I      csv_loader  274732139359754  2018-10-22 20:48:31.136  Loaded 1000 rows.
    D      csv_loader  274732139359754  2018-10-22 20:48:31.139  Function execution took 3288 ms, finished with status: 'ok'

    looks like the function ran as expected!

  • let's eyeball the bigquery table again and see if the row count has changed
  • $ bq show csvtestdataset.csvtable
    Table rickts-dev-project:csvtestdataset.csvtable

       Last modified           Schema           Total Rows   Total Bytes   Expiration   Time Partitioning   Labels
     ----------------- ----------------------- ------------ ------------- ------------ ------------------- --------
      22 Oct 13:48:29   |- id: integer          1000         70950
                        |- first_name: string
                        |- last_name: string
                        |- email: string
                        |- gender: string
                        |- ip_address: string

    great! there are now 1000 rows. looking good.

  • as a final check, let's compare the first 3 rows of the CSV
  • $ egrep '^[1,2,3],' testdata.csv
    1,Andres,Berthot,[email protected],Male,104.204.241.0
    2,Iris,Zwicker,[email protected],Female,61.87.224.4
    3,Aime,Gladdis,[email protected],Female,29.55.250.191

    with the first 3 rows of the bigquery table

    $ bq query 'select * from csvtestdataset.csvtable \
    where id IN (1,2,3)'
    Waiting on bqjob_r6a3239576845ac4d_000001669d987208_1 ... (0s) Current status: DONE
    +----+------------+-----------+---------------------------+--------+---------------+
    | id | first_name | last_name |           email           | gender |  ip_address   |
    +----+------------+-----------+---------------------------+--------+---------------+
    |  1 | Andres     | Berthot   | [email protected]         | Male   | 104.204.241.0 |
    |  2 | Iris       | Zwicker   | [email protected]         | Female | 61.87.224.4   |
    |  3 | Aime       | Gladdis   | [email protected] | Female | 29.55.250.191 |
    +----+------------+-----------+---------------------------+--------+---------------+

    and whaddyaknow, they match! w00t!

    proof of concept: complete!

    conclusion: cloud functions are pretty great.

    PoC: stream nginx access logs into Bigquery

    let's say you have some servers in a cluster serving vhost foo.com, and you want to put all the access logs from all the webservers for that vhost into Bigquery so you can perform analyses, or you just want all the access logs in one place.

    in addition to having the raw weblog data, you also want to keep track of which webserver the hits were served by, and what the vhost (Host header) was.

    so, foreach() server, we will install fluentd, configure it to tail the nginx access log, and upload everything to Bigquery for us.

    it worked like a champ. here’s what i did to PoC:

    1. Install fluentd
      $ curl -L https://toolbelt.treasuredata.com/sh/install-ubuntu-xenial-td-agent3.sh | sh
      
    2. Create a Bigquery dataset
       $ bq mk --dataset rickts-dev-project:nginxweblogs
      Dataset 'rickts-dev-project:nginxweblogs' successfully created.
    3. Create a JSON schema to handle the weblogs + server hostname + vhost name
      [
        {
          "name": "agent",
          "type": "STRING"
        },
        {
          "name": "code",
          "type": "STRING"
        },
        {
          "name": "host",
          "type": "STRING"
        },
        {
          "name": "method",
          "type": "STRING"
        },
        {
          "name": "path",
          "type": "STRING"
        },
        {
          "name": "referer",
          "type": "STRING"
        },
        {
          "name": "size",
          "type": "INTEGER"
        },
        {
          "name": "user",
          "type": "STRING"
        },
        {
          "name": "time",
          "type": "INTEGER"
        },
        {
          "name": "hostname",
          "type": "STRING"
        },
        {
          "name": "vhost",
          "type": "STRING"
        }
      ]
    4. Create a table in the Bigquery dataset to store the weblog data
      $ bq mk -t nginxweblogs.nginxweblogtable schema.json
      Table 'rickts-dev-project:nginxweblogs.nginxweblogtable' successfully created.
    5. Install the fluentd Google Bigquery plugins
      $ sudo /usr/sbin/td-agent-gem install fluent-plugin-bigquery --no-ri --no-rdoc -V
    6. Configure fluentd to read the nginx access log for this vhost and upload to Bigquery (while also adding the server hostname and vhost name) by creating an /etc/td-agent/td-agent.conf similar to this gist: https://gist.github.com/rickt/641e086d37ff7453b7ea202dc4266aa5 (unfortunately WordPress won't render it properly, sorry; a rough sketch of its shape appears right after this list)

      You’ll note we are using the record_transformer fluentd filter plugin to transform the access log entries with the webserver hostname and webserver virtualhost name before injection into Bigquery.

    7. After making sure that the user fluentd runs as (td-agent by default) has read access to your nginx access logs, start (or restart) fluentd
       $ sudo systemctl start td-agent.service
    8. Now make a call to your vhost (in my case, localhost)
       $ hostname
      hqvm
      $ curl http://localhost/index.html?text=helloworld
      you sent: "helloworld"
    9. Query Bigquery to look for that specific hit, first using the bq command line tool
       $ bq query 'SELECT * FROM nginxweblogs.nginxweblogtable WHERE path = "/index.html?text=helloworld"'
      +-------------+------+------+--------+-----------------------------+---------+------+------+------+----------+--------------------------+
      |    agent    | code | host | method |            path             | referer | size | user | time | hostname |          vhost           |
      +-------------+------+------+--------+-----------------------------+---------+------+------+------+----------+--------------------------+
      | curl/7.47.0 | 200  | ::1  | GET    | /index.html?text=helloworld | -       |   14 | -    | NULL | hqvm     | rickts-dev-box.fix8r.com |
      +-------------+------+------+--------+-----------------------------+---------+------+------+------+----------+--------------------------+
    10. Congratulations, you have just set up your web access logs to inject into a Bigquery table!
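
    as promised in step 6 above, here's a rough sketch of the general shape of that td-agent.conf. to be clear, this is not the actual config from the gist: the log and pos_file paths, the tag, and the vhost value are placeholder assumptions, and only the core-fluentd pieces (the in_tail source and the record_transformer filter) are spelled out; the fluent-plugin-bigquery <match> block is left as a comment since its exact parameters are best taken from the plugin docs and the gist:

    <source>
      @type tail
      path /var/log/nginx/access.log
      pos_file /var/log/td-agent/nginx-access.pos
      tag nginx.access
      format nginx
    </source>

    # enrich each record with the webserver's hostname and the vhost it serves
    <filter nginx.access>
      @type record_transformer
      <record>
        hostname "#{Socket.gethostname}"
        vhost foo.com
      </record>
    </filter>

    # a <match nginx.access> ... </match> block goes here, using the fluent-plugin-bigquery
    # output plugin pointed at rickts-dev-project:nginxweblogs.nginxweblogtable;
    # see the gist linked in step 6 for the actual settings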

    proof of concept: complete!

    conclusion: pushing your web access logs into Bigquery is extremely easy, not to mention a smart thing to do.

    the benefits exponentially increase as your server + vhost count increases. try consolidating, compressing and analyzing logs from N+ servers using months of data in-house and you’ll see the benefits of Bigquery right away.

    enjoy!

    a simple slack weather bot

    it's been really hot in Los Angeles recently, and i realised i was switching from the Slack app back to my web browser an awful lot to find out the current temperature downstairs before leaving the office.

    i realised that a /weather command would be far more efficient. say hello to slack-weather-bot.

    a simple golang backend that takes a request for /weather?zip=NNNNN and then posts a quick one-liner of current conditions back to you on Slack. enjoy.

    https://github.com/rickt/slack-weather-bot


    package slackweatherbot

    import (
        owm "github.com/briandowns/openweathermap"
        "golang.org/x/net/context"
        "google.golang.org/appengine"
        "google.golang.org/appengine/log"
        "google.golang.org/appengine/urlfetch"
        "net/http"
        "os"
        "strconv"
        "text/template"
    )

    const (
        weatherTemplate = `It's currently {{.Main.Temp}} °F ({{range .Weather}} {{.Description}} {{end}}) `
    )

    // get the current weather conditions from openweather
    func getCurrent(zip int, units, lang string, ctx context.Context) *owm.CurrentWeatherData {
        // create a urlfetch http client because we're in appengine and can't use net/http default
        cl := urlfetch.Client(ctx)
        // establish connection to openweather API
        cc, err := owm.NewCurrent(units, lang, owm.WithHttpClient(cl))
        if err != nil {
            log.Errorf(ctx, "ERROR handler() during owm.NewCurrent: %s", err)
            return nil
        }
        cc.CurrentByZip(zip, "US")
        return cc
    }

    // redirect requests to / to /weather
    func handler_redirect(w http.ResponseWriter, r *http.Request) {
        http.Redirect(w, r, "/weather", 302)
    }

    // handle requests to /weather
    // currently supports parameter of ?zip=NNNNNN or no zip parameter, in which case DEFAULT_ZIP is used
    func handler_weather(w http.ResponseWriter, r *http.Request) {
        // create an appengine context so we can log
        ctx := appengine.NewContext(r)
        // check the parameters
        zip := r.URL.Query().Get("zip")
        switch zip {
        // if no zip parameter given, get the DEFAULT_ZIP from our env vars
        case "":
            zip = os.Getenv("DEFAULT_ZIP")
        }
        // convert the zip string to an int because that's what openweather wants
        var zipint int
        zipint, err := strconv.Atoi(zip)
        if err != nil {
            log.Errorf(ctx, "ERROR handler_weather() zip conversion problem: %s", err)
            return
        }
        // get the current weather data
        wd := getCurrent(zipint, os.Getenv("UNITS"), os.Getenv("LANG"), ctx)
        // make the template
        tmpl, err := template.New("weather").Parse(weatherTemplate)
        if err != nil {
            log.Errorf(ctx, "ERROR handler_weather() during template.New: %s", err)
            return
        }
        // execute the template
        err = tmpl.Execute(w, wd)
        if err != nil {
            log.Errorf(ctx, "ERROR handler_weather() during template.Execute: %s", err)
            return
        }
        // we're done here
        return
    }

    // because we're in appengine, there is no main()
    func init() {
        http.HandleFunc("/", handler_redirect)
        http.HandleFunc("/weather", handler_weather)
    }

    // EOF

    slack useragent checker

    are you curious if your [corporate] Slack users are logging into Slack using a Slack desktop or mobile app, or if they’re just using the Slack webpage? here’s some example code to call the slack team.accessLogs API to output which of your Slack users are not using a desktop or mobile Slack app.

    https://github.com/rickt/golang-slack-tools/blob/master/slackaccessloglooker.go

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
        "strings"
        "time"
    )

    type SlackAccessLog struct {
        Status     bool                  `json:"ok"`
        Logins     []SlackAccessLogEntry `json:"logins"`
        PagingData SlackAccessLogPaging  `json:"paging"`
    }

    type SlackAccessLogEntry struct {
        UserID    string `json:"user_id"`
        Username  string `json:"username"`
        DateFirst int64  `json:"date_first"`
        DateLast  int64  `json:"date_last"`
        Count     int    `json:"count"`
        IP        string `json:"ip"`
        UserAgent string `json:"user_agent"`
        ISP       string `json:"isp"`
        Country   string `json:"country"`
        Region    string `json:"region"`
    }

    type SlackAccessLogPaging struct {
        Count int `json:"count"`
        Total int `json:"total"`
        Page  int `json:"page"`
        Pages int `json:"pages"`
    }

    // helper func to do a case-insensitive search
    func caseinsensitivecontains(a, b string) bool {
        return strings.Contains(strings.ToUpper(a), strings.ToUpper(b))
    }

    var (
        page     int    = 1
        pages    int    = 101
        token    string = "REDACTED"
        slackurl string = "https://slack.com/api/team.accessLogs"
    )

    func main() {
        var sal SlackAccessLog
        // 100 pages of JSON max from the team.accessLogs Slack API
        for page := 1; page < pages; page++ {
            // build the url
            url := fmt.Sprintf("%s?token=%s&page=%d", slackurl, token, page)
            // create the request
            req, err := http.NewRequest("GET", url, nil)
            if err != nil {
                log.Fatalf("error: %s", err)
            }
            // create the http client
            client := &http.Client{}
            // get the response
            response, err := client.Do(req)
            if err != nil {
                log.Fatalf("error: %s", err)
            }
            defer response.Body.Close()
            // decode the JSON response into our SlackAccessLog var
            if err := json.NewDecoder(response.Body).Decode(&sal); err != nil {
                log.Println(err)
            }
            // range through this page of the response and ignore Slack App/Android/iPhone useragents
            for _, dj := range sal.Logins {
                if !(caseinsensitivecontains(dj.UserAgent, "Slack_SSB") || caseinsensitivecontains(dj.UserAgent, "Android") || caseinsensitivecontains(dj.UserAgent, "iPhone")) {
                    tm := time.Unix(dj.DateLast, 0)
                    fmt.Printf("%s\t%s\t%s\n", dj.Username, tm, dj.UserAgent)
                }
            }
        }
    }

    Slack Translation Bot

    stable release of Slack Translator Bot.

    http://github.com/rickt/slack-translator-bot

    what is Slack Translator Bot? the [as-is demo] code gets you a couple of Slack /slash commands that let you translate from English to Japanese, and vice-versa.

    the screenshot below shows an example response to a Slack user wanting to translate “the rain in spain falls mainly on the plane” by typing:

    [screenshot]

    within slack:

    [screenshot]

    TL;DR/HOW-TO

    • you get a google translate api key
    • you deploy a go backend app to Google Appengine that responds to
    • a couple of custom Slack /slash commands that you create in Slack
    • you profit

    http://gist-it.appspot.com/https://github.com/rickt/slack-translator-bot/blob/master/slacktranslatorbot.go
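
    if you just want the rough shape of such a handler without clicking through, here's a minimal sketch. it is not the code linked above: it skips Appengine entirely and runs as a plain net/http server, and the TRANSLATE_APIKEY environment variable name and the response formatting are made-up assumptions. it does use the real Google Translate v2 REST endpoint and the real Slack convention that a /slash command POSTs a form with a "text" field and displays a plain-text response body:

    package main

    // minimal sketch of a Slack /slash command handler that translates the
    // supplied text to Japanese via the Google Translate v2 REST API.
    // TRANSLATE_APIKEY is an assumed environment variable name.

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "net/url"
        "os"
    )

    // just enough of the Translate v2 response to pull out the translated text
    type translateResponse struct {
        Data struct {
            Translations []struct {
                TranslatedText string `json:"translatedText"`
            } `json:"translations"`
        } `json:"data"`
    }

    func handleTranslate(w http.ResponseWriter, r *http.Request) {
        // Slack sends /slash command payloads as form-encoded POSTs; "text" holds what the user typed
        text := r.FormValue("text")
        // call the Translate v2 endpoint, English -> Japanese
        v := url.Values{"key": {os.Getenv("TRANSLATE_APIKEY")}, "q": {text}, "source": {"en"}, "target": {"ja"}}
        resp, err := http.Get("https://translation.googleapis.com/language/translate/v2?" + v.Encode())
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        defer resp.Body.Close()
        var tr translateResponse
        if err := json.NewDecoder(resp.Body).Decode(&tr); err != nil || len(tr.Data.Translations) == 0 {
            http.Error(w, "translation failed", http.StatusInternalServerError)
            return
        }
        // a plain-text body is echoed back to the user as the /slash command's response
        fmt.Fprintf(w, "%s -> %s", text, tr.Data.Translations[0].TranslatedText)
    }

    func main() {
        http.HandleFunc("/translate", handleTranslate)
        http.ListenAndServe(":8080", nil)
    }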


    Slack Team Directory Bot

    stable release of Slack Team Directory Bot.

    http://github.com/rickt/slack-team-directory-bot

    what is Slack Team Directory Bot? you get a Slack /slash command that lets you search your Slack Team Directory quick as a flash.

    the screenshot below shows an example response to a Slack user trying to find someone in your accounting department by typing:

    [screenshot]

    within slack:

    [screenshot]

    TL;DR/HOW-TO

    • you deploy a go backend app to Google Appengine that responds to…
    • a custom Slack /slash command that you create in Slack

    http://gist-it.appspot.com/http://github.com/rickt/slack-team-directory-bot/blob/master/slackteamdirectorybot.go
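
    again, as a rough sketch only (the real code is linked above): the heart of such a bot is a single call to Slack's users.list Web API and a case-insensitive match of the query against each member's profile. the SLACK_TOKEN environment variable name and the choice of matched fields are assumptions for illustration:

    package main

    // minimal sketch: search the Slack team directory (users.list) for a query string.
    // SLACK_TOKEN is an assumed environment variable name.

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "os"
        "strings"
    )

    // just enough of the users.list response for a directory search
    type usersListResponse struct {
        OK      bool `json:"ok"`
        Members []struct {
            Name    string `json:"name"`
            Profile struct {
                RealName string `json:"real_name"`
                Title    string `json:"title"`
                Email    string `json:"email"`
            } `json:"profile"`
        } `json:"members"`
    }

    func searchDirectory(query string) ([]string, error) {
        resp, err := http.Get("https://slack.com/api/users.list?token=" + os.Getenv("SLACK_TOKEN"))
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        var ul usersListResponse
        if err := json.NewDecoder(resp.Body).Decode(&ul); err != nil {
            return nil, err
        }
        var hits []string
        for _, m := range ul.Members {
            // case-insensitive match on username, real name or job title
            haystack := strings.ToUpper(m.Name + " " + m.Profile.RealName + " " + m.Profile.Title)
            if strings.Contains(haystack, strings.ToUpper(query)) {
                hits = append(hits, fmt.Sprintf("%s (%s) %s", m.Profile.RealName, m.Profile.Title, m.Profile.Email))
            }
        }
        return hits, nil
    }

    func main() {
        hits, err := searchDirectory("accounting")
        if err != nil {
            panic(err)
        }
        for _, h := range hits {
            fmt.Println(h)
        }
    }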

    UPDATED: golang code to query GA data via the Google Core Reporting API using two-legged service account Google OAuth2 authentication

    i’ve updated my example Golang code that authenticates with the Core Reporting API using service account OAuth2 (two-legged) authentication to use the newly updated golang.org/x/oauth2 library.

    my previous post & full explanation of service account pre-reqs/setup:

    http://code.rickt.org/post/142452087425/how-to-download-google-analytics-data-with-golang

    full code:

    package main

    import (
        "fmt"
        "golang.org/x/oauth2"
        "golang.org/x/oauth2/jwt"
        "google.golang.org/api/analytics/v3"
        "io/ioutil"
        "log"
        "time"
    )

    // constants
    const (
        datelayout string = "2006-01-02" // date format that Core Reporting API requires
    )

    // globals that you DON'T need to change
    var (
        enddate   string = time.Now().Format(datelayout)                          // set end query date to today
        startdate string = time.Now().Add(time.Hour * 24 * -1).Format(datelayout) // set start query date to yesterday
        metric    string = "ga:pageviews"                                         // GA metric that we want
        tokenurl  string = "https://accounts.google.com/o/oauth2/token"           // (json:"token_uri") Google oauth2 Token URL
    )

    // globals that you DO need to change
    // populate these with values from the JSON secretsfile obtained from the Google Cloud Console specific to your app
    // example secretsfile JSON:
    // {
    //   "web": {
    //     "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    //     "token_uri": "https://accounts.google.com/o/oauth2/token",
    //     "client_email": "[email protected]",
    //     "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/[email protected]",
    //     "client_id": "blahblahblahblah.apps.googleusercontent.com",
    //     "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs"
    //   }
    // }
    var (
        // CHANGE THESE!!!
        gaServiceAcctEmail  string = "[email protected]"    // (json:"client_email") email address of registered application
        gaServiceAcctPEMKey string = "./analyticsdumper.pem" // full path to private key file (PEM format) of your application from Google Cloud Console
        gaTableID           string = "ga:NNNNNNNN"           // namespaced profile (table) ID of your analytics account/property/profile
    )

    // func: main()
    // the main function.
    func main() {
        // load up the registered applications private key
        pk, err := ioutil.ReadFile(gaServiceAcctPEMKey)
        if err != nil {
            log.Fatal("Error reading GA Service Account PEM key -", err)
        }
        // create a jwt.Config that we will subsequently use for our authenticated client/transport
        // relevant docs for all the oauth2 & json web token stuff at https://godoc.org/golang.org/x/oauth2 & https://godoc.org/golang.org/x/oauth2/jwt
        jwtc := jwt.Config{
            Email:      gaServiceAcctEmail,
            PrivateKey: pk,
            Scopes:     []string{analytics.AnalyticsReadonlyScope},
            TokenURL:   tokenurl,
        }
        // create our authenticated http client using the jwt.Config we just created
        clt := jwtc.Client(oauth2.NoContext)
        // create a new analytics service by passing in the authenticated http client
        as, err := analytics.New(clt)
        if err != nil {
            log.Fatal("Error creating Analytics Service at analytics.New() -", err)
        }
        // create a new analytics data service by passing in the analytics service we just created
        // relevant docs for all the analytics stuff at https://godoc.org/google.golang.org/api/analytics/v3
        ads := analytics.NewDataGaService(as)
        // w00t! now we're talking to the core reporting API. the hard stuff is over, lets setup a simple query.
        // setup the query, call the Analytics API via our analytics data service's Get func with the table ID, dates & metric variables
        gasetup := ads.Get(gaTableID, startdate, enddate, metric)
        // send the query to the API, get a big fat gaData back.
        gadata, err := gasetup.Do()
        if err != nil {
            log.Fatal("API error at gasetup.Do() -", err)
        }
        // print out some nice things
        fmt.Printf("%s pageviews for %s (%s) from %s to %s.\n", gadata.Rows[0], gadata.ProfileInfo.ProfileName, gadata.ProfileInfo.WebPropertyId, startdate, enddate)
        return
    }

    have fun!

    drop me a line or say hello on twitter if you have any questions.

    rickt/slack-appengine

    stable release of my fork of https://github.com/bluele/slack, modified for Google Appengine. i have this working in production on several Appengine-hosted /slash commands & bots.

    http://github.com/rickt/slack-appengine

    the TL;DR on my modifications:

    https://github.com/rickt/slack-appengine
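
    the usual reason a library needs an Appengine-specific fork (and the same pattern the weather bot code above uses) is that on the old Appengine standard runtime, outbound HTTP has to go through urlfetch rather than the default net/http client. a tiny illustrative sketch of that pattern, not the fork's actual diff:

    package example

    // sketch of the standard App Engine pattern: outbound HTTP must use urlfetch,
    // so any Slack API call needs an urlfetch-backed *http.Client instead of http.DefaultClient

    import (
        "net/http"

        "google.golang.org/appengine"
        "google.golang.org/appengine/log"
        "google.golang.org/appengine/urlfetch"
    )

    func pingSlack(w http.ResponseWriter, r *http.Request) {
        ctx := appengine.NewContext(r)
        client := urlfetch.Client(ctx) // App Engine-aware *http.Client
        resp, err := client.Get("https://slack.com/api/api.test")
        if err != nil {
            log.Errorf(ctx, "slack api call failed: %s", err)
            return
        }
        defer resp.Body.Close()
        w.WriteHeader(resp.StatusCode)
    }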
