Rim weighting macro v1.1.1

It’s been a long time since I posted here; I haven’t been able to find the time recently to continue developing the software. The change to the rim weighting add-in isn’t a major one, just a bug fix.

There was a bug in the way the check for the existence of targets was done: I was comparing the cell’s text to the cell’s value. In most scenarios they are the same, but sometimes they’re not. In this instance the error showed up when a cell contained the value 1.00: the text is ‘1.00’ while the value is 1.
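As an illustration only (the add-in itself is Excel-based, so this is a JavaScript analogy rather than the actual fix), the problem and the repair look something like this:

// Analogy in JavaScript: comparing a cell's formatted text to its value
var cellText = '1.00';   // what the cell displays
var cellValue = 1;       // what the cell actually holds

console.log(cellText === String(cellValue));     // false: the old, buggy check
console.log(parseFloat(cellText) === cellValue); // true: compare values, not text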

Anyway I’ve put a new version on the site.

Gain function of a filter

In version 1.2.1 of ssci.js I added a gain and phase shift function to the ssci.smooth.filter() function.

In this post I’ll go through what a gain function is and give an example of it. According to the ABS (the Australian Bureau of Statistics):

Gain functions can be used to look at the effect of a linear filter at a given frequency on the amplitude of a cycle in a given series. In other words, it shows what happens to the amplitude of a given input cycle on the application of a moving average.

The function I have defined takes a single value, the period, and returns the factor the filter applies to the amplitude of a cycle of that period.

Therefore if we have a filter defined as:
var orp = ssci.smooth.filter()
    .filter(ssci.smooth.henderson(13));
orp();

Then we can use the following to get the gain function:
var gout = [];
for (var i = 1; i < 20; i = i + 0.1) {
    gout.push([i, orp.gain(i)]);
}

This calculates the gain for periods from 1 to 20. Charting it gives the following:

From the chart you can tell that cycles of between 1.2 and 6 periods are effectively removed from the data. The gain then rises gradually for periods between 6 and 14, going above 0.9.

You can also see that if you only want to look at the gain of a filter, you can do so without specifying any data, accessor functions or anything else.
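For reference, here is a minimal sketch of the standard gain calculation for a symmetric moving-average filter with an odd number of weights, written in plain JavaScript rather than taken from the ssci.js internals. It assumes the usual definition of the gain as the modulus of the filter’s frequency response at the given period; the formulae on the ABS website referenced below use the same idea.

// Sketch only: gain of a symmetric filter (odd number of weights) at a given period.
// Assumes the usual definition G(p) = |sum over j of w[j] * exp(i * 2 * pi * j / p)|.
function gainAt(weights, period) {
    var m = (weights.length - 1) / 2;  // half-width of the filter
    var re = 0, im = 0;
    for (var j = -m; j <= m; j++) {
        var omega = 2 * Math.PI * j / period;
        re += weights[j + m] * Math.cos(omega);
        im += weights[j + m] * Math.sin(omega);
    }
    return Math.sqrt(re * re + im * im);
}

// e.g. gainAt(ssci.smooth.henderson(13), 10) - compare against orp.gain(10)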

More information (including the formulae) is on the ABS website. Source code is on GitHub.






Asymmetric filters on time series data in JavaScript

I recently made a change to the ssci.smooth.filter() function in the ssci.js JavaScript library. Here is a lengthier explanation of the change.

One of the changes I’ve made is to add the ability to use asymmetric filters with this function. This is achieved via a setter function that defines the start and end indexes of the points the filter is applied to, relative to the point being adjusted.

To give a concrete example, if we call the point being adjusted ‘n’, the start is two points before this and the end is two points after, then we would set this via:

var example = ssci.smooth.filter()
    .data(data)
    .filter([0.2, 0.2, 0.2, 0.2, 0.2])
    .limits([-2, 2]);

This is still a symmetric filter and, given the five-term filter used in the example, is what you get by default if you don’t use the limits() setter function.

However, if you have quarterly data and want to take a moving average over the year, you can now do so via the method used above. This time you would use:

var example = ssci.smooth.filter()
    .data(data)
    .filter([0.25, 0.25, 0.25, 0.25])
    .limits([-3, 0]);

You can also difference the data using this method:

var example = ssci.smooth.filter()
    .data(data)
    .filter([-1, 1])
    .limits([-1, 0]);

This just takes the current point, multiplies it by 1 and subtracts the point before from it, i.e. a first difference.
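To make the weights-plus-limits idea concrete, here is a minimal sketch in plain JavaScript of how such a filter can be applied by hand. It is not the ssci.js implementation: it treats the data as a plain array of y-values, uses a hypothetical applyFilter helper, and simply skips points that don’t have a full window.

// Sketch: apply weights w over the window [start, end] relative to point n.
// Points without a full window are skipped, so the output is shorter than the input.
function applyFilter(data, w, limits) {
    var start = limits[0], end = limits[1];
    var out = [];
    for (var n = -start; n < data.length - end; n++) {
        var sum = 0;
        for (var j = start; j <= end; j++) {
            sum += w[j - start] * data[n + j];
        }
        out.push(sum);
    }
    return out;
}

// First difference: applyFilter([3, 5, 4, 7], [-1, 1], [-1, 0]) gives [2, -1, 3].
// The four-quarter moving average above corresponds to w = [0.25, 0.25, 0.25, 0.25]
// with limits = [-3, 0].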

An explanation of the function can be found here. The source code is here.

Henderson Filters in JavaScript

I wrote in a recent post that I’d added a function to calculate Henderson filters to the ssci.js library. This post will expand on that (slightly).

To quote the Australian Bureau of Statistics:

Henderson filters were derived by Robert Henderson in 1916 for use in actuarial applications. They are trend filters, commonly used in time series analysis to smooth seasonally adjusted estimates in order to generate a trend estimate. They are used in preference to simpler moving averages because they can reproduce polynomials of up to degree 3, thereby capturing trend turning points.

The filters themselves have an odd number of terms and can be generated for sequences of 3 terms or more. To use the function contained within the JavaScript library, you can just call:

ssci.smooth.henderson(term)

where term is the number of terms you want to return. These are returned as an array. So if term was 5 you would get:
[
    -0.07342657342657342,
    0.2937062937062937,
    0.5594405594405595,
    0.2937062937062937,
    -0.07342657342657342
]

The equation to generate the filters was taken from here.
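For reference, here is a minimal sketch of that weight formula (the version given in the ABS material), written independently of the library’s own code and assuming the number of terms is odd and at least 3:

// Sketch of the Henderson weight formula, not the ssci.js source itself.
// n is the (odd) number of terms; the weights are returned as an array.
function hendersonWeights(n) {
    var m = (n - 1) / 2;
    var p = m + 2;
    var denom = 8 * p * (p * p - 1) * (4 * p * p - 1) * (4 * p * p - 9) * (4 * p * p - 25);
    var weights = [];
    for (var j = -m; j <= m; j++) {
        var num = 315 * ((p - 1) * (p - 1) - j * j) * (p * p - j * j) *
            ((p + 1) * (p + 1) - j * j) * (3 * p * p - 16 - 11 * j * j);
        weights.push(num / denom);
    }
    return weights;
}

// hendersonWeights(5) reproduces the five-term array shown above.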

To actually filter a set of data, this would be combined with the ssci.smooth.filter() function, i.e.:

var ft = ssci.smooth.filter()
    .filter(ssci.smooth.henderson(23))
    .data(data);

And here’s an example using the data from the kernel smoothing post.






Rim Weighting Question

I recently had a question about rim weighting and how to set the values for the maximum iterations and the upper and lower weight caps.

I’ve reproduced my answer, though I’ve adjusted it slightly:

Maximum Iterations

  • The value to set here will depend largely on how many rims you have, how small the cells are and how close the actuals are to the targets. The only way to tell for sure is to see what difference it makes to the weights when you run the program again with one more iteration. If it makes no difference to the weights then you’re ok to leave it as it is. Non-convergence in this case will be down to either the rims having conflicting targets (i.e. one rim causes the weights to go up and another causes them to go down) or the weight cap bringing the weights back down (or up).
  • In terms of an actual value, 25 is generally OK for a small number of rims (of the order of 5 to 20). However, I’ve seen weighting schemes that required more than 200 iterations to converge; these had hundreds of interlaced rims.
  • Potentially I could add a metric to the program to check for a minimum weight change so that the program ends if all weights change by less than this figure. It would, of course, affect performance though and is not a trivial change.

Upper Weight Cap

  • A good starting point for this figure is to divide the target proportions (or base sizes) by the actuals for each cell and look at the largest ratio. So if, for example, you had 20 percent males in the sample but the target was 45 percent, and this was the biggest difference, then the biggest initial weight would be 0.45/0.2 = 2.25. Given the way the algorithm works it will not stay at that, but it should be of that order; it will also depend on the other rims.
  • One consequence of lowering the upper weight cap is that it will reduce the WEFF, the weighting efficiency (a sketch of the calculation follows this list). A higher WEFF means lower precision in your estimates, i.e. it increases the standard error. However, lowering the weight cap can also increase the number of iterations and potentially lead to non-convergence.
  • I’d set a value that allows the procedure to converge and gives a reasonable WEFF. Generally a value of 5 or 6 is fine for proportional targets, and 5 or 6 times the total base size divided by the total number of panellists for base-size targets (e.g. if there are 1000 panellists and a total base size of 4500, then set a value of 4.5 * 5 = 22.5).
  • A WEFF above 1.5 – 1.6 is high and is an indication of poor representation within the panel.
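As promised above, here is a minimal sketch of the WEFF calculation. It assumes the usual weighting design effect (the Kish formula: the number of cases times the sum of the squared weights, divided by the square of the summed weights), which is 1 when all weights are equal; the rim weighting program may define it slightly differently.

// Sketch only: WEFF as the usual weighting design effect (Kish),
// n * sum(w^2) / (sum(w))^2, computed over the final weights.
function weff(weights) {
    var n = weights.length;
    var sum = 0, sumSq = 0;
    for (var i = 0; i < n; i++) {
        sum += weights[i];
        sumSq += weights[i] * weights[i];
    }
    return n * sumSq / (sum * sum);
}

// Equal weights give a WEFF of exactly 1; the more the weights vary, the higher it gets.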

Lower Weight Cap

  • I’d leave this at 0 unless the WEFF needs to be lowered. A good indication of problems with the targets or with the panel is when all the weights drop to near zero.

So, the basic answer to how to set them is that it depends on any lack of convergence and how high the WEFF goes. The above should give some indication of where to set them though.

Thanks to Bryan for the question.

Change to Kernel Smoother

Introduction

I recently changed a whole load of the functions in the ssci JavaScript library. One of these changes was to the way the kernel smoothing function works.

The previous function was fairly slow and didn’t scale particularly well; in fact, its main loop suggests that it scales as O(n^2).

I therefore decided to make a change to the function. Instead of looping over every point and then calculating the weight at every other point, I’ve changed it so that:

  • It loops through every point, as the previous function did.
  • Then, for point n, it calculates the weight at the central point (i.e. at point n itself).
  • It then loops through every point lower in the array than n and calculates its weight. If that weight, relative to the weight at the central point, falls below a certain threshold, the loop ends.
  • It then loops through every point higher in the array than n and calculates its weight. Again, if that weight falls below the threshold relative to the central point, the loop ends.

The default setting of the threshold is 0.001 (i.e. 0.1%). The way the function operates does, however, rely on two assumptions (a sketch of the loop follows the list):

  • The data in the array has already been ordered by the x-coordinate.
  • The kernel function being used must decrease monotonically away from the central point, so that once the weight drops below the threshold it stays below it.
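Here is a minimal sketch of that loop in plain JavaScript. It is not the ssci.js source: it assumes the data is an array of [x, y] pairs sorted by x, uses a simple un-normalised Gaussian kernel, and returns a Nadaraya-Watson style weighted average for a single point.

// Sketch of the early-termination idea for smoothing the point at index n.
// scale is the kernel bandwidth, diff the relative weight threshold (0.001 by default).
function gaussian(u) {
    return Math.exp(-0.5 * u * u);
}

function smoothPoint(data, n, scale, diff) {
    var centre = gaussian(0);                  // weight at the central point
    var sumW = centre;
    var sumWY = centre * data[n][1];

    // Walk left from n and stop once the weights become negligible
    for (var i = n - 1; i >= 0; i--) {
        var w = gaussian((data[i][0] - data[n][0]) / scale);
        if (w / centre < diff) { break; }
        sumW += w;
        sumWY += w * data[i][1];
    }

    // Walk right from n and stop once the weights become negligible
    for (var j = n + 1; j < data.length; j++) {
        var w = gaussian((data[j][0] - data[n][0]) / scale);
        if (w / centre < diff) { break; }
        sumW += w;
        sumWY += w * data[j][1];
    }

    return sumWY / sumW;                       // weighted average of nearby y-values
}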

Example

Issue #43 of d3-shape gives an example of some data that has been smoothed: the observed rate of tweets for Eric Fischer’s Twitter feed over the period from January 8, 2015 to November 30, 2015. I’ve taken the data and charted it below.

Performance

I’ve put together some figures detailing the performance of the two routines on my computer. To time the functions and see how they scaled, I used the following:

var iterations = 10;

for (var j = 0; j < 10; j++) {
    // Take a progressively smaller slice of the data on each pass
    var temp2 = temp1.slice(j * temp1.length / 10);

    // Time the old function
    console.time('Function #1');
    for (var i = 0; i < iterations; i++) {
        ssci.smoothKernel("Gaussian", temp2, 2000000);
    }
    console.timeEnd('Function #1');

    // Time the new function
    console.time('Function #2');
    for (var i = 0; i < iterations; i++) {
        var data4 = ssci.smoothKernel2()
            .scale(2000000)
            .kernel("Gaussian")
            .diff(0.001)
            .data(temp2);

        data4();
    }
    console.timeEnd('Function #2');
}

In the code above, temp1 is the array holding the data points. The code takes the data, chops it into progressively smaller arrays and runs each function 10 times on every slice.

This leads to the following times (in milliseconds) per run, i.e. divided by ten:

Rows    #1 (ms)    #2 (ms)    #1/#2
 144         10         10     0.94
 288         38         26     1.45
 432         84         42     1.99
 576        149         58     2.55
 720        234         76     3.08
 864        336         91     3.69
1008        457        107     4.26
1152        597        124     4.82
1296        756        140     5.41
1440        949        160     5.95

Let’s plot these numbers.

The new function is anywhere from about the same speed to six times faster, and the gap widens as the amount of data grows, as you would expect against an O(n^2) function. The new version appears to scale roughly linearly.

Of course, there is also the question of what this change does to the accuracy of the smoothing. That may be a slightly odd thing to say, given that no smoothing algorithm can really be called accurate, but what I’m looking at here is the difference between the output of the old function and the new one.

As you can see from the graph, the differences in this example are of the order of 0.004%, with a maximum of 0.03%. I think we can live with that.

The new function can be found on www.surveyscience.co.uk.