TechRants

Just another WordPress.com weblog

Dear TFL


This is the worst advice I have ever read. Ever.

Welcome New User: We have emailed you with a temporary password. Due to security requirements this temporary password needs to be changed immediately. Passwords must now be changed periodically. Please enter a strong password: they must be 7 characters long and contain at least one number and one uppercase character. Eg: Password1 or T1ckets

Seriously, what the…?

Written by Simon Helson

September 27, 2013 at 9:44 pm

Posted in Uncategorized

Migrating from iOS to Android


Well heck, of all posts I never thought I would do one of these.

I’ve recently moved from an iPhone 5 to an LG (Google) Nexus 4, and wanted to note down how I moved things from one device to the other, both because it may be useful to someone else (or me!) in the future, and to make sure I haven’t missed anything. The move was easier than it once would have been, given I already synced things with Google; however, there were a few quirks that needed ironing out.

Mail:
This one was easy – I was using Gmail anyway, so it just worked seamlessly.

Contacts:
I thought this would be easy: I had ticked the “synchronise contacts” option on the iPhone against my Google account, originally with the Exchange protocol and then with the newer CardDAV protocol. It turns out that if someone had emailed me in the past and so ended up in my Google contacts, the iPhone copy of the contact wouldn’t overwrite or merge with that entry, so many phone number fields were empty when I looked on the new phone! Even when I manually deleted these from Google Contacts on the web, then tried disabling and re-enabling sync on the iPhone, nothing updated.
The fix in the end was to open Contacts.app on the laptop – which was synced with iCloud and therefore my iPhone – make sure that was up to date, then enable syncing from it to Gmail. This properly updated the Google contacts, and from there my new phone updated happily.

Calendars:
All on Google already, easy as.

Photos / Photo Stream:
I’m very precious about my mobile photos – I have pretty much every photo I’ve ever taken on a phone with a usable camera, i.e. everything from my first iPhone onwards. It was important that I got these imported onto the new phone, as I like to look through them occasionally, and also important that any new photos get backed up.
For the first part, syncing old photos to the new phone, I had an export from the iPhone in an Aperture library – I took these original files and copied them to the phone using the Android File Transfer utility. Weirdly enough they all turned up in a very random order, which for 2500-ish photos is just useless. Playing around with an application called jhead on the command line showed me that a number of photos simply didn’t have any Exif information at all (the Nexus uses the Exif date field for ordering), and a number were in PNG format, having been created by on-phone editors. On the iPhone this wasn’t an issue, as the photo app simply ordered things by name, by virtue of owning the “please save an image” API, but on any other phone or photo manipulation app this just wouldn’t fly.
I ended up fixing this with a few lines of Python that walked the list of IMGXXXX.JPG files, checked for an Exif tag, and if there wasn’t one, created one and set the date-taken field to match the preceding photo. This isn’t 100% correct, but it maintains the ordering. PNG files were converted to JPGs and run through the same process. This got my photos onto the device, browsable in proper chronological order.
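For posterity, here’s a minimal sketch of that approach. It’s a reconstruction rather than the original script: it shells out to jhead (mentioned above) to read and write the Exif dates, and the directory name is purely illustrative.

import os
import re
import subprocess

photo_dir = "photos"   # hypothetical directory full of IMGXXXX.JPG files
last_date = None       # Exif date of the last photo that had one

for name in sorted(os.listdir(photo_dir)):
    if not name.lower().endswith(".jpg"):
        continue
    path = os.path.join(photo_dir, name)
    # jhead prints a "Date/Time" line when the file has an Exif date
    output = subprocess.check_output(["jhead", path]).decode()
    match = re.search(r"Date/Time\s*:\s*(\S+ \S+)", output)
    if match:
        last_date = match.group(1)    # e.g. "2013:04:01 12:00:00"
    elif last_date:
        date, time = last_date.split()
        # -mkexif creates a minimal Exif header, -ts sets its timestamp;
        # borrowing the preceding photo's date keeps the ordering intact
        subprocess.check_call(["jhead", "-mkexif", path])
        subprocess.check_call(["jhead", "-ts%s-%s" % (date, time), path])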

In terms of backing these photos up, the Nexus by default will upload every photo you take to a private G+ album. This is not a bad feature – it’s similar to the iOS Photo Stream functionality – however there is a 5GB limit unless you let it resize your pictures. This being Android, I discovered that several apps also implemented this feature, and one called “smugfolio” would do the same thing, but to my existing SmugMug account. Given that this is where I keep all of my other photos, this was awesome! It has run flawlessly for the last week, and I’m hoping it’ll continue to do so, as I didn’t want to use yet another cloud storage provider for this.

Notes:
For many people their notes may not be hugely important, or they might already use something like Evernote. I didn’t, and had about 50 notes in Notes.app that I wanted to keep. I eventually found some AppleScript that would export the notes from the OS X version of Notes.app into a text file, then mangled this with some scripting magic into the plain-text format that SimpleNote could import. A few iterations later I had all of my notes in SimpleNote with their original creation/modification timestamps intact – success! I’m now using Notational Velocity on OS X and Notational Acceleration on the phone to talk to the SimpleNote stash, and it seems to be working well.

Books:
On iOS I was using iBooks to read eBooks; it worked, and I don’t demand a lot from eBook readers. On Android I’m bouncing between Moon+ Reader and Aldiko; I haven’t spent enough time with either yet to decide which I’ll keep.

What I do need to do is go through and double-check the above, then simulate a phone failure or loss. On iOS I knew I had a full backup at any given time; the Android experience is more distributed, so keeping track of what data lives where is more important. I’ll update this post once I’ve done the failure test with anything I may have missed. I still have an older iPhone with my phone backup on it, so there’s a safety net for now.

Written by Simon Helson

April 14, 2013 at 4:57 pm

Posted in Uncategorized

Juniper SRX to Linux IPsec VPN configuration


As preparation for a possible new contract I’ve been revising my IPsec knowledge, mainly around how Juniper implements IPsec; I also hadn’t set up IPsec on Linux in several years (back in the FreeS/WAN days), so it seemed like an opportune time to catch up on that too. In my most recent contract I did a bunch of Juniper-to-Juniper IPsec, which is easy since all the standard proposals work and very little knob tweaking is required. When talking to another vendor, however, custom proposals are often needed for both Phase 1 and Phase 2 negotiations, and this is where things get a little more complex, in both the configuration and troubleshooting departments.

One of my favourite things about working with Juniper is that their baby devices behave almost exactly like their giant devices. I can break out the trusty Juniper SRX100 and know that the config I place on it will work right up to an SRX1400, or even the SRX3400/5800 series (with a few caveats). This makes labbing up situations on a smaller scale for proof-of-concept or training work very easy.

My lab situation looks like the following, with the intention being to create a secure tunnel between the 192.168.40.0/24 and 192.168.50.0/24 subnets.

[Network diagram: the SRX (10.0.0.1) with 192.168.40.0/24 behind it, and the Linux host “nomad” (10.0.1.1) with the simulated 192.168.50.0/24 behind it]

 

I’ve cheated slightly as my Linux host out on the internet only has one network interface, so I’ve dropped an additional IP address onto the loopback interface. This won’t affect any of the config, but for those interested in how to do such a thing, the following commands are used:

ip addr add 192.168.50.1/24 dev lo
ip route add 192.168.50.0/24 dev lo

 

This simulates a “remote subnet” hanging off the linux host, which is what we’ll use for all our configuration and testing.
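Purely as an optional extra for the lab, the same addresses can be made persistent across reboots; a hedged sketch using ifupdown’s post-up hooks in /etc/network/interfaces:

auto lo
iface lo inet loopback
        post-up ip addr add 192.168.50.1/24 dev lo
        post-up ip route add 192.168.50.0/24 dev lo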

Linux Configuration
Since I wanted to improve my Juniper configuration skills, I first configured the Linux end of the tunnel – the idea being that once that configuration was set, I would need to customise the Juniper end to fit, as though the Linux host were a remote customer or similar. Configuring the Linux side was a learning experience in itself: there are two files you need to modify, one to establish kernel hooks for security policies, and one to configure the racoon daemon, which performs all the negotiation to set up the security associations themselves.

On Ubuntu the security policies live in /etc/ipsec-tools.conf, which essentially defines “interesting” traffic and sets a tunnel requirement for that traffic:

#!/usr/sbin/setkey -f
## Flush the SAD and SPD
#
flush;
spdflush;
## Security policy definitions for our test subnets
spdadd 192.168.50.0/24 192.168.40.0/24 any -P out ipsec
           esp/tunnel/10.0.1.1-10.0.0.1/require;

spdadd 192.168.40.0/24 192.168.50.0/24 any -P in ipsec
           esp/tunnel/10.0.0.1-10.0.1.1/require;

 

Note that you need security policies in each direction; it’s very easy to typo these and end up in a world of confusion, so be careful!
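To double-check what actually got installed, setkey can dump the kernel’s policy and association databases (standard ipsec-tools usage):

setkey -DP   # dump the security policy database (SPD)
setkey -D    # dump the security association database (SAD)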

Next you set up the configuration for racoon, which is signalled by the kernel (using magic!) to negotiate a compatible set of security associations for the tunnel so that encrypted and authenticated data can pass.

path pre_shared_key "/etc/racoon/psk.txt";

remote 10.0.0.1 {
        exchange_mode main;
        peers_identifier address;
        proposal {
                encryption_algorithm aes 128;
                hash_algorithm sha1;
                authentication_method pre_shared_key;
                dh_group modp1024;
        }
}

sainfo address 192.168.50.0/24 any address 192.168.40.0/24 any {
        pfs_group modp1024;
        encryption_algorithm aes 128;
        authentication_algorithm hmac_sha1;
        compression_algorithm deflate;
}

 

As a quick note, groups modp768 and modp1024 translate to Diffie-Hellman groups 1 and 2, and could be referred to as such in the config if desired – we’ll see that at the SRX end.

You also need a psk.txt file, which is very simple:

# ipv4 addresses and keys
10.0.0.1	foobar1234

 

Once these files are set up you’re ready to go on the Linux side. I found the best way to see what was going on was to do the following:

service setkey restart
racoon -d -F -f /etc/racoon/racoon.conf

 

This starts racoon in the foreground and in debug mode, so you get to see pretty much everything as it happens on the Linux side. Trust me, this is really useful!

SRX Configuration
Configuring the SRX isn’t too difficult if you’re used to zone-based security configuration. I’ve set mine up using a “policy based” configuration; there are also “route based” configurations, which can save on the number of security associations the device needs to maintain, but I’m not covering those here. To get this tunnel up and working we need to perform the following steps (note that “nomad” is the name of the Linux host, hence the naming convention):

IKE Phase 1

  • Create a custom IKE proposal set
     

    root# show security ike proposal nomad-proposals 
    authentication-method pre-shared-keys;
    dh-group group2;
    authentication-algorithm sha1;
    encryption-algorithm aes-128-cbc;
    

     

  • Create an IKE policy that refers to the proposal set
     

    root# show security ike policy ike-nomad-policy
    mode main;
    proposals nomad-proposals;
    pre-shared-key ascii-text "$9$X9xNdsq.5/CuoJGiq.F3cyrKLxdbsoZU"; ## SECRET-DATA
    

     

  • Create an IKE gateway that refers to the IKE policy, and defines the peer address for the tunnel
     

    root# show security ike gateway ike-gate1 
    ike-policy ike-nomad-policy;
    address 10.0.1.1;
    external-interface fe-0/0/0;
    

     

IPsec Phase 2

  • Create a custom IPsec proposal set
     

    root# show security ipsec proposal nomad-ipsec-proposal 
    protocol esp;
    authentication-algorithm hmac-sha1-96;
    encryption-algorithm aes-128-cbc;
    

     

  • Create an IPsec policy that refers to the proposal set
     

    root# show security ipsec policy nomad-policy 
    perfect-forward-secrecy {
        keys group2;
    }
    proposals nomad-ipsec-proposal;
    

     

  • Create an IPsec vpn entry that refers to the IKE gateway and IPsec policy you created
     

    root# show security ipsec vpn nomad-vpn 
    ike {
        gateway ike-gate1;
        ipsec-policy nomad-policy;
    }
    

     

Define “interesting traffic”

  • Add the subnets you want to create the tunnel for into the address-book in the appropriate zones (a sketch of this follows below)
  • Add a match rule to the zone-based firewall in both directions to match the interesting traffic, with an action of “permit tunnel xxxx”
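For completeness, a rough sketch of what those address-book entries could look like (names chosen to match the policies below; exact address-book syntax varies between Junos versions):

    root# show security zones security-zone trust address-book
    address home-net 192.168.40.0/24;

    root# show security zones security-zone untrust address-book
    address nomad-net 192.168.50.0/24;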
     

    root# show security policies from-zone trust to-zone untrust 
    policy home-nomad-ipsec {
        match {
            source-address home-net;
            destination-address nomad-net;
            application any;
        }
        then {
            permit {
                tunnel {
                    ipsec-vpn nomad-vpn;
                    pair-policy nomad-home-ipsec;
                }
            }
        }
    }
    

     

    root# show security policies from-zone untrust to-zone trust  
    policy nomad-home-ipsec {
        match {
            source-address nomad-net;
            destination-address home-net;
            application any;
        }
        then {
            permit {
                tunnel {
                    ipsec-vpn nomad-vpn;
                    pair-policy home-nomad-ipsec;
                }
            }
        }
    }
    

     
Note the “pair-policy” statement in both security policies. This ties the two policies together so that the device knows which set of security associations to negotiate, ensuring return traffic also has a security association established.

At this point we should have enough configuration to test bringing up the tunnel. On the Linux host I start a ping with the source address set to the loopback:
     

    root@nomad:/etc/racoon# ping -I 192.168.50.1 192.168.40.2
    PING 192.168.40.2 (192.168.40.2) from 192.168.50.1 : 56(84) bytes of data.
    ping: sendmsg: Invalid argument
    ping: sendmsg: Invalid argument
    64 bytes from 192.168.40.2: icmp_seq=3 ttl=64 time=153 ms
    

     
Success! Checking out racoon’s log shows some goodness also:
     

    2012-04-15 12:39:04: INFO: ISAKMP-SA established 10.0.1.1[500]-10.0.0.1[500] spi:8ad3d84035121d8e:1fa4afc1c586f531
    …
    2012-04-15 12:39:05: INFO: IPsec-SA established: ESP/Tunnel 10.0.0.1[0]->10.0.1.1[0] spi=191818254(0xb6eea0e)
    2012-04-15 12:39:05: INFO: IPsec-SA established: ESP/Tunnel 10.0.1.1[500]->10.0.0.1[500] spi=1680274424(0x6426f3f8)
    

     
And on the SRX:
     

    root> show security ike sa 
    Index   Remote Address  State  Initiator cookie  Responder cookie  Mode
    40      10.0.1.1    UP     8ad3d84035121d8e  1fa4afc1c586f531  Main         
    
    root> show security ipsec sa 
      Total active tunnels: 1
      ID    Gateway          Port  Algorithm       SPI      Life:sec/kb  Mon vsys
      4    10.0.1.1     500   ESP:3des/md5    b6eea0e  3471/ unlim   -   0 
    

     
Most excellent: everything agrees that we have tunnels up, and everything is happy.

This is just a basic configuration, but it proved out the concept of setting up custom proposals on the Junos side. There are more IPsec features, like dead-peer-detection, that could be configured, and setting custom timeouts for rekeying of security associations would also be useful (from the output above we’re currently using 3600 seconds).
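As a hedged sketch (the values here are illustrative, not recommendations), those rekey lifetimes can be pinned per proposal on the Junos side:

    root# set security ike proposal nomad-proposals lifetime-seconds 28800
    root# set security ipsec proposal nomad-ipsec-proposal lifetime-seconds 3600

and in racoon.conf via a lifetime statement inside the proposal and sainfo stanzas respectively:

    lifetime time 28800 sec;
    lifetime time 3600 sec;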
     
Dead Peer Detection
After some food, my man-page fu increased greatly, and I found the dead-peer-detection stanzas for racoon. I’ve since enabled this on both ends of the tunnel, and the tunnel is far better at recovering from a racoon restart or IKE confusion. I’m almost wondering whether something like an RPM test couldn’t be used for faster failover, since 3 × 10 seconds is a long time to wait these days. That, or perhaps two tunnels with BFD route failover – food for thought.
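For reference, a hedged sketch of the two DPD stanzas, with illustrative timers matching the 3 × 10 seconds above. In racoon.conf the options live inside the remote block:

    dpd_delay 10;      # seconds between liveness checks
    dpd_retry 10;      # wait this long before retrying a check
    dpd_maxfail 3;     # declare the peer dead after 3 failed checks

and the Junos equivalent hangs off the IKE gateway:

    root# set security ike gateway ike-gate1 dead-peer-detection interval 10 threshold 3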

Written by Simon Helson

April 15, 2012 at 5:04 pm

OS X – Moving windows about


I’ve recently moved into a new house where I’m lucky enough to have some office space. As part of this I now own an external monitor, which runs at a vastly higher resolution than the LCD on my MacBook Pro. This was all well and good, until I realised the annoyance of having to reposition my windows every time I plugged the laptop in or unplugged it. I keep specific windows in certain places on my screen, so having to do this twice (or more) a day was fairly irritating.

Thankfully, there is a solution. AppleScript can be used to move windows about, even ones that don’t really want to be moved, and I ran across an app called Marco Polo (link) which uses “evidence” to work out where you are, and applies different settings to your laptop accordingly. Putting these together I’ve come up with a fairly awesome setup. Using “Dell monitor connected” as my evidence, I have Marco Polo run the following piece of AppleScript, which repositions my windows and also changes the audio output device to the monitor (which then connects to the office stereo).

Thanks to this post for the inspiration!


-- this sets the position of windows for my Dell 2711 monitor

tell application "Google Chrome"
	get bounds of first window
	set bounds of first window to {467, 67, 2183, 1116}
end tell

tell application "Adium"
	get bounds of first window
	set bounds of first window to {43, 963, 256, 1439}
	get bounds of fifth window
	set bounds of fifth window to {43, 22, 459, 352}
end tell

tell application "iTerm"
	get bounds of first window
	set bounds of first window to {402, 655, 1547, 1440}
end tell

tell application "Colloquy"
	get bounds of first window
	set bounds of first window to {1650, 830, 2560, 1440}
end tell

tell application "System Events"
	tell process "Echofon"
		set position of front window to {2200, 22}
	end tell
end tell

-- set the audio output to be the dell also
tell application "System Preferences"
	activate
	set current pane to pane "com.apple.preference.sound"
end tell

tell application "System Events"
	tell application process "System Preferences"
		tell tab group 1 of window "Sound"
			click radio button "Output"
			set selected of row 2 of table 1 of scroll area 1 to true
			set deviceselected to "DELL U2711"
		end tell
	end tell
end tell

tell application "System Preferences" to quit

By having the “get bounds of the nth window” call in there I can use the same script if I want to alter positions: I simply move the window to its new home, run the script, and note the result of the “get” command to paste into the “set” command for the next run. I also have an opposite of this script, which moves the windows to their optimum positions for the laptop screen, resizing the browser window in the process.
An interesting gotcha was that Echofon didn’t respond to standard coaxing, hence I had to go in using the “System Events” UI scripting method of AppleScript; to use this you must turn on “support for assistive devices” under “Universal Access” in System Preferences. The changing of the audio output uses the same method, and running the script will actually launch and then quit the prefpane momentarily.

What next? Given Marco Polo will use all sorts of signals as “evidence”, I’m planning on having it change network locations based on Wi-Fi SSID, so that I only have IPv6 enabled on “trusted” networks where I know there is an upstream firewall (i.e. at home). This is a feature I’ve wanted for a long time, since OS X’s default firewall quietly ignores v6, leaving your laptop open on the internet.

Written by Simon Helson

May 20, 2011 at 8:11 am

Posted in Uncategorized

Zenoss – Getting the “count” for an event in an event transform


A question came up in the Zenoss IRC channel yesterday about using the count of an event as a variable in an event transform. It turns out that it’s not quite as simple as the following:

if evt.count > 3:
    do_something()

To understand why this is the case, we need to understand how events flow into Zenoss:

1) A collector daemon (or similar) raises an event and sends it to ZenHub
2) ZenHub receives the event and processes any event transforms
3) ZenHub files the event into MySQL (presuming the event hasn’t been dropped by a transform)

At step 2, ZenHub has received an event that in JSON looks a little like this:

{'component': '',
 'device': 'host101.somewhere.com',
 'eventClass': '/Unknown',
 'ipAddress': '',
 'message': 'Event Message',
 'monitor': 'localhost',
 'severity': 3,
 'summary': 'foobaa'}

At this point ZenHub will process the event through any appropriate transforms, but it has no idea whether this is the first time this event has come in or the tenth. It also doesn’t know the ID given to the event when it’s put in the MySQL database, so evt.id is not set either. This has the advantage of speed, since it means ZenHub doesn’t need to do a read from the MySQL database every time an event comes in – especially if it’s an event that you have transformed with code like:

if evt:
    evt._action = "drop"

which means you don’t care about such events and don’t want them in your MySQL DB anyway. What would be the point in doing a SELECT from the DB for each of those?

If you really need to get hold of the history of an event in a transform, however, Zenoss provides enough hooks for you to do this. Note that it causes a lookup in the MySQL DB, so it has a cost attached. Some of this code was taken from the Zenoss wiki here, but I’ve wrapped it in a complete transform that demonstrates how the count appears after doing the query.

import logging
log = logging.getLogger("zen.Events")


log.error("Checking for evt.count in a transform")

if hasattr(evt, "count"):
    log.error("Count found in evt object %s" % evt.count)
else:
    log.error("No count found in evt object")
    log.error("Trying another method to get the count from the db")
    log.error("Using example from http://community.zenoss.org/docs/DOC-2554#Change_severity_dependant_on_count")

    # Rebuild the dedupid the same way Zenoss does: the summary is only
    # included when there is no eventKey
    dedupfields = [evt.device, evt.component, evt.eventClass,
                   evt.eventKey, evt.severity]
    if not evt.eventKey:
        dedupfields += [evt.summary]
    mydedupid = '|'.join(map(str, dedupfields))

    # Get the event details (including count) from the existing event
    # that is in the mysql database
    em = dmd.Events.getEventManager()
    em.cleanCache()
    try:
        ed = em.getEventDetail(dedupid=mydedupid)
        mycount = ed.count
    except Exception:
        # No existing event in the db yet - this is the first occurrence
        mycount = 0

    log.error("After trying, count=%i" % mycount)

     

This also demonstrates that you can add logging to your event transforms, which is amazingly useful. The log file turns up in $ZENHOME/logs/zenhub.log, and for this example looks something like this:

2011-03-07 22:04:26,213 ERROR zen.Events: Checking for evt.count in a transform
2011-03-07 22:04:26,213 ERROR zen.Events: No count found in evt object
2011-03-07 22:04:26,213 ERROR zen.Events: Trying another method to get the count from the db
2011-03-07 22:04:26,213 ERROR zen.Events: Using example from http://community.zenoss.org/docs/DOC-2554#Change_severity_dependant_on_count
2011-03-07 22:04:26,216 ERROR zen.Events: After trying, count=7


So there you have it: with some wrangling in the event transform you can get the count for an event (or any other field you choose) from the database, and then make decisions based on it. The most common use is to increase the severity based on count, but the possibilities are endless.
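For instance, the severity bump could be appended to the end of the transform above (the threshold here is illustrative):

# escalate events that keep recurring; 5 is Zenoss's Critical severity
if mycount > 10:
    evt.severity = 5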

Written by Simon Helson

March 8, 2011 at 8:27 am

Posted in Zenoss

Zenoss – what is being monitored?


Someone asked me today how they could tell what exactly was being collected by Zenoss, as management were asking.
This is probably a fairly common question, so I hurled together a quick script to dump exactly that from the database. It wouldn’t be hard to change the formatting from “print” to some kind of spreadsheet/XML/JSON/whatever management wants – it gives full detail as to which datapoints are being monitored on which components of which devices.

The script walks each device; for each device it pulls the currently bound templates and their datapoints, then does the same for any monitored components the device may have.
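As a rough zendmd sketch of that walk (method names are from the Zenoss 2.x API, so treat this as illustrative; the real script is linked at the end of the post):

# walk every device, printing bound templates and datapoints, then the
# same for each monitored component
for device in dmd.Devices.getSubDevices():
    print "Device: %s (%s)" % (device.id, device.getDeviceClassPath())
    print "- Bound Templates:"
    for template in device.getRRDTemplates():
        print "  - %s" % template.id
        for dp in template.getRRDDataPoints():
            print "    - %s" % dp.id
    print "- Device Components:"
    for component in device.getMonitoredComponents():
        print "  - %s" % component.getPrimaryUrlPath()
        for template in component.getRRDTemplates():
            print "    - component template: %s" % template.id
            for dp in template.getRRDDataPoints():
                print "      - %s" % dp.id

The output looks like the following: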


Device: foo.baa.nettikconsulting.co.uk (/Server/Firewall)
- Bound Templates:
  - Device (Devices/Server)
    - Datapoints:
      - laLoadInt5
      - memAvailReal
      - memAvailSwap
      - memBuffer
      - memCached
      - ssCpuRawIdle
      - ssCpuRawSystem
      - ssCpuRawUser
      - ssCpuRawWait
      - sysUpTime
- Device Components:
  - os/interfaces/bnx0
    - component template: ethernetCsmacd (Devices)
      - Datapoints:
        - ifInErrors
        - ifInOctets
        - ifInUcastPackets
        - ifOperStatus
        - ifOutErrors
        - ifOutOctets
        - ifOutUcastPackets
***************************
Device: http://www.nettikconsulting.co.uk (/HTTP)
- Bound Templates:
  - Device (Devices)
    - Datapoints:
      - sysUpTime
  - HttpMonitor (Devices)
    - Datapoints:
      - size
      - time
- Device Components:
**************************

Grab the code here

Written by Simon Helson

January 3, 2011 at 4:02 pm

Posted in Zenoss

Multiple Zenoss collectors behind SSL and rendering graphs


Zenoss supports multiple collectors of performance data, either natively if you’ve bought Zenoss Enterprise, or via a community ZenPack if you’re a Zenoss Core user. In this arrangement your RRD files – the files containing your performance data – are distributed across the collectors. To bring this data back to the web interface, Zenoss provides an API interface to the RRD files called zenrender.

zenrender runs on each collector, listening on port 8091 for requests and returning either rendered graphs or values from RRD files. The values from the RRD files are used by the Zenoss web interface to populate things like the current amount of disk space used on a device’s detail page; the returned graphs are rendered directly into the user’s browser.

How does Zenoss know where zenrender lives? It uses a value called the “Render URL” in each collector’s configuration. This is accessible for any device (in zendmd) as device.perfServer().renderurl. So, when Zenoss renders the HTML for any page with graphs for a device on it, it takes the contents of the renderurl property and prepends it to what the graph URL would normally be; for example:

/render?foobaa&gopts=options

becomes

http://zenosshost:8090/render/collectorname:8091/render?foobaa&gopts=options

As an aside, zenosshost:8090 is the zenhub process; it deals with /render/collectorname:8091 and punts the request off to the appropriate collector for you.

This all works quite well, up to a point. If you have your Zenoss installation behind a web server using SSL and some mod_proxy magic to redirect the requests to the Zenoss host on port 8080, you’ll suddenly discover that all these graph URLs starting with http://zenosshost:8090 either won’t work, or you’ll get security warnings as the rendered graphs aren’t served via SSL.

This has been raised as a trac ticket here: http://dev.zenoss.org/trac/ticket/7348 but the fix isn’t due until the next release, unfortunately.

To get around this I ended up patching some of Zenoss’s code and crafting an httpd configuration so that the URLs returned by Zenoss are sane even for external users, yet when it tries to fetch raw RRD values for its internal use it can still find the collectors. Essentially I define an “external render URL” which is only used for graph URLs, and this is then proxied via the httpd to the appropriate internal URL. Three files are changed: PerformanceConf.py, which defines the details for a collector, and the two template files for the GUI, to enable setting this new value.

NOTE: This is not an official patch from Zenoss, and it comes with no warranties etc. These are core Zenoss files, so upgrades will probably overwrite your changes. Back things up! Also, the patched files are for Zenoss version 2.5.2. The same principles apply to Zenoss v3, but the changes will almost certainly be different. I’ll try and cover off v3 in the future.

Step 1: Update files
I’ve created an archive of the files that are edited – download it here. Download and unpack the archive. The next step depends on whether you’re running Core or Enterprise, as Enterprise overrides the collector skin files.

Zenoss Core
Backup the following files:
$ZENHOME/Products/ZenModel/PerformanceConf.py
$ZENHOME/Products/ZenModel/skins/zenmodel/editPerformanceConf
$ZENHOME/Products/ZenModel/skins/zenmodel/viewPerformanceConfOverview.pt

Copy PerformanceConf.py from the archive to $ZENHOME/Products/ZenModel
Copy editPerformanceConf and viewPerformanceConfOverview.pt from the core/ directory in the archive to $ZENHOME/Products/ZenModel/skins/zenmodel/

Zenoss Enterprise
Backup the following files:
$ZENHOME/Products/ZenModel/PerformanceConf.py
$ZENHOME/ZenPacks/ZenPacks.zenoss.DistributedCollector-2.1.2-py2.4.egg/ZenPacks/zenoss/DistributedCollector/skins/ZenPacks.zenoss.DistributedCollector/editPerformanceConf
$ZENHOME/ZenPacks/ZenPacks.zenoss.DistributedCollector-2.1.2-py2.4.egg/ZenPacks/zenoss/DistributedCollector/skins/ZenPacks.zenoss.DistributedCollector/viewPerformanceConfOverview.pt

Copy PerformanceConf.py from the archive to $ZENHOME/Products/ZenModel
Copy editPerformanceConf and viewPerformanceConfOverview.pt from the enterprise/ directory in the archive to $ZENHOME/ZenPacks/ZenPacks.zenoss.DistributedCollector-2.1.2-py2.4.egg/ZenPacks/zenoss/DistributedCollector/skins/ZenPacks.zenoss.DistributedCollector/

Restart Zenoss

Step 2: Configure Apache to proxy the requests
Let’s imagine for this configuration that we have a master Zenoss host and two external collectors (collector1.internal and collector2.internal). Apache runs on the Zenoss host, listening on port 443 to serve the Zenoss install via SSL. Users access your Zenoss install at https://zenoss.company.com/

Firstly we create proxy passes for the render URLs we will set; the collector names here match the collector names in Zenoss.

ProxyPass /render/collector1 http://collector1.internal:8091/
ProxyPass /render/collector2 http://collector2.internal:8091/

Finally we have the base ProxyPass to redirect all other requests:

ProxyPass / http://localhost:8080/VirtualHostBase/https/zenoss.company.com:443/

Step 3: Configure Zenoss
Now we need to tell Zenoss to serve up URLs that Apache can proxy for us. To do this we modify both render URLs for each collector. Using the example above:

localhost Collector:
render url: /zport/RenderServer
external render url: /zport/RenderServer

Collector 1:
render url: http://collector1.internal:8091
external render url: /render/collector1

Collector 2:
render url: http://collector2.internal:8091
external render url: /render/collector2

All going well, you should be able to view the graphs for any device on any collector, without any security warnings. If you right-click on a graph link (or view the page source) you will be able to confirm whether your new URLs are in use; this can also help with debugging, along with the Apache logs.

Written by Simon Helson

December 22, 2010 at 5:12 pm

Posted in Uncategorized, Zenoss