Saturday, August 29, 2009

iPhone: Voice Mill

Yesterday I played a bit with AVAudioRecorder, and this is a very small and fun example.
The main idea is to create something like a windmill that works with voice instead of wind.

The code below shows you how to record something. Then, with updateMeters() and peakPowerForChannel(), you can extract the audio "noise" level.


NSURL *url = [NSURL fileURLWithPath:@"/dev/null"];   // record to /dev/null: we only need the metering data
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
                            [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                            [NSNumber numberWithInt:kAudioFormatAppleLossless], AVFormatIDKey,
                            [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                            [NSNumber numberWithInt:AVAudioQualityLow], AVEncoderAudioQualityKey,
                            nil];

NSError *error = nil;
recorder = [[AVAudioRecorder alloc] initWithURL:url
                                       settings:settings error:&error];
if (recorder) {
    [recorder prepareToRecord];
    recorder.meteringEnabled = YES;   // required for updateMeters/peakPowerForChannel:
    [recorder record];
    ...
}
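
The "..." above is where the metering loop goes: peakPowerForChannel() returns values in decibels, roughly -160 dB for silence up to 0 dB for full scale, so one way to drive the mill is to convert the peak to a linear value and use it as the rotation speed. Here is that mapping as a tiny sketch (just the math, written in Python for brevity; it is not the actual Cocoa code):

import math

def peak_to_speed(peak_db, max_speed=360.0):
    # Map a peak power in dB (-160.0 .. 0.0) to a mill rotation speed (deg/sec)
    linear = math.pow(10.0, peak_db / 20.0)   # 0.0 (silence) .. 1.0 (full scale)
    return linear * max_speed

print(peak_to_speed(-160.0), peak_to_speed(-20.0), peak_to_speed(0.0))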


The SWF video is available here: Voice Mill Video.
The source code is available here: Cocoa Voice Mill Source Code.

Tuesday, August 25, 2009

Block Cache Algorithm

I need to replace my old filesystem cache algorithm with something newer and more efficient. The old one is based on an LRU/LFU algorithm: there is a queue of cached blocks and a hashtable to speed up block lookup.


struct blkcache_buf {
    struct blkcache_buf * next;     /* Next Queue Item */
    struct blkcache_buf * prev;     /* Prev Queue Item */
    struct blkcache_buf * hash;     /* Next Item with the same hash */

    xuint16_t count;                /* Retain count */
    xxx_block_t block;              /* Cached Block */
};

typedef struct {
    struct blkcache_buf ** buf_hash;    /* Bufs Hashtable */
    xuint16_t buf_hash_size;            /* Bufs Hashtable Size */
    xuint16_t buf_used;                 /* Bufs in use */

    struct blkcache_buf * head;         /* Head of the Bufs Queue */
    struct blkcache_buf * tail;         /* Tail of the Bufs Queue */

    xxx_device_t * device;              /* Block Device used for I/O */
} xxx_blkcache_t;


Above you can see the cache data structures, and below the core of the cache algorithm.


#define BUFHASH(cache, blocknr) ((blocknr) % (cache)->buf_hash_size)

xxx_block_t *xxx_blkcache_read (xxx_blkcache_t *cache,
                                xxx_blkptr_t blocknr)
{
    struct blkcache_buf *buf = NULL;
    xuint16_t hash_index;

    /* Scan the hash chain for the block */
    hash_index = BUFHASH(cache, blocknr);
    if ((buf = __blkcache_find(cache, blocknr, hash_index)) != NULL) {
        buf->count++;

        /* Move the buf away from the head of the queue */
        __blkcache_buf_shift(cache, buf);

        return(&(buf->block));
    }

    /* Cache is full, remove one item */
    if ((cache->buf_used + 1) > cache->buf_hash_size) {
        /* All buffers are in use */
        if (cache->head->count > 0)
            return(NULL);

        /* Remove the Least-Frequently Used buffer */
        buf = __blkcache_remove_lfu(cache);
        cache->buf_used--;
    }

    /* The desired block is not cached, read it! */
    if ((buf = __blkcache_buf_alloc(cache, buf, blocknr)) == NULL)
        return(NULL);

    /* Add one use, so the block cannot be removed */
    buf->count++;

    /* Go get the requested block from the device */
    __blkcache_readwrite(cache, buf, RFALSE);

    /* Update the cache hashtable */
    cache->buf_used++;
    buf->hash = cache->buf_hash[hash_index];
    cache->buf_hash[hash_index] = buf;

    /* Append the buf to the tail of the cache queue */
    if (cache->head == NULL) {
        cache->head = cache->tail = buf;
    } else {
        buf->prev = cache->tail;
        cache->tail->next = buf;
        cache->tail = buf;
    }

    return(&(buf->block));
}


You can download a demo implementation here: lru-block-cache.c, but I'm waiting for ideas or suggestions to improve (or radically change) the cache algorithm. Thanks in advance!
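
In the meantime, here is a tiny Python model of the same queue-plus-hashtable idea (only the LRU part, with hypothetical names and no retain counts); something like this can be handy for trying out eviction policies before touching the C code.

class BlockCacheModel(object):
    """Toy model of the queue + hashtable cache (LRU part only)."""

    def __init__(self, device_read, capacity):
        self.device_read = device_read   # callback: blocknr -> block data
        self.capacity = capacity
        self.queue = []                  # block numbers, head = coldest block
        self.blocks = {}                 # blocknr -> block data

    def read(self, blocknr):
        if blocknr in self.blocks:
            # Hit: move the block away from the head of the queue
            self.queue.remove(blocknr)
            self.queue.append(blocknr)
            return self.blocks[blocknr]

        if len(self.queue) >= self.capacity:
            # Miss on a full cache: evict the block at the head
            victim = self.queue.pop(0)
            del self.blocks[victim]

        # Miss: read the block from the device and cache it
        block = self.device_read(blocknr)
        self.blocks[blocknr] = block
        self.queue.append(blocknr)
        return block

if __name__ == '__main__':
    cache = BlockCacheModel(lambda blocknr: 'block-%d' % blocknr, 4)
    for nr in (1, 2, 3, 1, 4, 5, 1):   # block 1 stays hot, block 2 gets evicted
        cache.read(nr)
    print(sorted(cache.queue))         # [1, 3, 4, 5]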

Tuesday, August 11, 2009

[TIP] Generic Binary Format

I love using binary formats instead of XML and JSON. Here is my generic binary format for data transmission or serialization. Data is composed of three blocks. The first one is a single byte that describes all the information about the object, like "is a single int object" or "is a list", and tells you the length of the second block. The second block contains the size of the third block (the data block) or the number of items in the list.
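
The exact bit layout isn't spelled out here, so the sketch below is just one possible packing in Python, to make the three-block idea concrete; the flag/type bit assignments and the encode() helper are my assumptions, not the actual format.

import struct

# Hypothetical header layout:
#   bit 7     -> 1 if the object is a list, 0 if it is a single value
#   bits 4..6 -> type id (0 = int, 1 = string, ...)
#   bits 0..3 -> length in bytes of the second block
TYPE_INT, TYPE_STRING = 0, 1

def encode(type_id, payload, is_list=False, items=0):
    # Second block: data size for a single object, item count for a list
    value = items if is_list else len(payload)
    if value < 0x100:
        block2 = struct.pack('>B', value)
    elif value < 0x10000:
        block2 = struct.pack('>H', value)
    else:
        block2 = struct.pack('>I', value)
    header = (0x80 if is_list else 0) | (type_id << 4) | len(block2)
    return struct.pack('>B', header) + block2 + payload

# A single 4-byte int, and a "list" of three 4-byte ints
single = encode(TYPE_INT, struct.pack('>I', 42))
triple = encode(TYPE_INT, struct.pack('>III', 1, 2, 3), is_list=True, items=3)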

Monday, August 10, 2009

[TIP] Generic Key Comparer

I'm back coding on my "Cloud" filesystem and distributed tools, and here is a little tip.

When you're working with data, you probably store it as a key-value pair in a B-Tree or something similar, and maybe this key is an aggregation of information... Maybe you have a flag bit, N bytes of ParentKey, and so on...

Now, the problem is... how can a "foreign" server sort my keys correctly? The solution is to send the server the information on how to sort them... or a method to do it... but today I'm focusing on the first one.



The code below shows a Python implementation of the generic key comparer. At the end of the source code you can find a usage example. The full source code is available here: Generic Key Comparer Source Code.



import struct

def __indexOfOne(data, tokens, offset):
  for i in xrange(offset, len(data)):
    if data[i] in tokens:
      return i
  return -1

def rawComparer(data1, data2, comparer):
  typeIds = [ 's', 'u', 'c', 'i', 'f', 'x' ]
  pyBinMap = {
    ('u', 1): 'B', ('u', 2): 'H', ('u', 4): 'L', ('u', 8): 'Q',
    ('i', 1): 'b', ('i', 2): 'h', ('i', 4): 'l', ('i', 8): 'q',
    ('f', 4): 'f', ('f', 8): 'd'
  }

  p = i = 0
  while i < len(comparer):
    nextIdx = __indexOfOne(comparer, typeIds, i + 1)
    if nextIdx < 0: nextIdx = len(comparer)

    format = None
    length = 1 if (i + 1) == nextIdx else int(comparer[i+1:nextIdx])
    if comparer[i] == 's':
      format = str(length) + 's'
    elif comparer[i] == 'c':
      format = 'c'
    elif (comparer[i], length) in pyBinMap:
      format = pyBinMap[(comparer[i], length)]

    if format is not None:
      # '=' -> standard sizes, no padding, so offsets match the comparer string
      d1 = struct.unpack('=' + format, data1[p:p+length])[0]
      d2 = struct.unpack('=' + format, data2[p:p+length])[0]

      if d1 < d2:
        return -1
      elif d1 > d2:
        return 1

    p += length
    i = nextIdx

  return 0

# Usage Example
if __name__ == '__main__':
  data1 = struct.pack('=4sLch', 'test', 10, 'A', -3)
  data2 = struct.pack('=4sLch', 'test', 10, 'A', -3)
  print 'Equal (test 10 A -3)', rawComparer(data1, data2, 's4u4ci2')

  data1 = struct.pack('=4sLch', 'test', 10, 'A', 1)
  data2 = struct.pack('=4sLch', 'test', 10, 'A', 0)
  print '(test 10 A 1) > (test 10 A 0)', rawComparer(data1, data2, 's4u4ci2')

  data1 = struct.pack('=4sLch', 'test', 10, 'A', 0)
  data2 = struct.pack('=4sLch', 'test', 10, 'A', 1)
  print '(test 10 A 0) < (test 10 A 1)', rawComparer(data1, data2, 's4u4ci2')

Thursday, August 6, 2009

Unified Notification Service: Avoid The Wheel Reinvention

Every programmer loves to reinvent the wheel, and reinventing the wheel is still my primary hobby. Sometimes you need to reimplement a network protocol to use it with your favorite language/library, sometimes it's only for fun, but if you're in the business world it's probably "better" (faster) to use one of the thousands of existing libraries.

In most cases you need to reimplement a protocol to embed it in your application, and sometimes you have to reimplement two or three protocols that do the same job, like IM protocols (XMPP, AIM, Yahoo...).

A better solution, which avoids reinventing the wheel, is to use an existing library to handle the protocol(s) you need and build an abstract interface, with your own data format, that gives you a generic way to communicate with the various providers. Below you can find a graphical example of what I mean.

You can have many providers, written in different languages. These providers talk to the Notification Service, which provides an abstract interface for the apps. In this way, the end app just has to say "Write a mail to X", "Download today's mail", "Send an IM to X"... and you can intercept notifications to display them as a popup on your desktop... or something similar.
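
As a rough sketch of the abstraction I have in mind (all names here are hypothetical, not an actual API), each provider wraps its own protocol library behind the same small interface, and the service just dispatches generic requests to it:

class Provider(object):
    """Hypothetical provider interface: every backend (XMPP, AIM, mail, ...)
    wraps its own protocol library behind these calls."""

    def name(self):
        raise NotImplementedError

    def send_message(self, recipient, body):
        raise NotImplementedError

    def fetch_messages(self, since=None):
        raise NotImplementedError

class NotificationService(object):
    """Dispatches generic requests to the registered providers and lets
    listeners intercept notifications (e.g. to show a desktop popup)."""

    def __init__(self):
        self.providers = {}
        self.listeners = []

    def register(self, provider):
        self.providers[provider.name()] = provider

    def send(self, provider_name, recipient, body):
        self.providers[provider_name].send_message(recipient, body)

    def notify(self, event):
        for listener in self.listeners:
            listener(event)

# svc = NotificationService()
# svc.register(XmppProvider(...))     # hypothetical concrete provider
# svc.send('xmpp', 'foo@example.com', 'Hello!')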

This solution will be used in the MokoTouch Project to provide core services to the apps. For more information, send me a mail.

Tuesday, August 4, 2009

A Bit of Distributed Computation...

In the last few months I've worked on various 2D rendering projects that require lots of raw power to be executed in the smallest time possible.

The rendering result is an aggregation of components (or better, groups of components) that can be rendered independently of each other, because each component has its own input data and, until the aggregation step starts, there are no dependencies between components. In a few words, for each input data item I have to call a Render method that returns the "computed" data, which at the end will be aggregated with all the other computed values.

So, how do you speed up the rendering? The standard answer is to use threads, to take advantage of multi-core systems. But I need lots of raw power and lots of cores to be fast enough. Another solution, one that doesn't require spending a lot of money on a faster computer, is to distribute the computation across different machines. And this is my attempt...

The Master receives an array of elements as input and splits it up according to the number of available Nodes (machines); for each node it assigns a sub-array and sends the node an "informative message".

The Node waits for an "informative message" and, when it has received it, starts its computation. For each data item it fetches, it runs the computation and sends back the result.

The Master decomposes the data into equal-size partitions, so each node has an equal-size queue to process, but if one node finishes its job and there is more data to process (in someone else's queue), the Master dequeues a couple of items from the slowest node's queue and adds them to the queue of the node that has finished. In this way the computation finishes faster than ever, and if one machine crashes (the slow-node case) its job will be taken over by another node that has finished its own.
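
To make the splitting and work-stealing idea concrete, here is a minimal single-process sketch (all names are mine, there is no real networking, and render() is just a stand-in for the real Render method):

def render(item):
    return item * item                  # stand-in for the real Render method

def split(data, nodes):
    # Decompose the input into one equal-size queue per node
    size = (len(data) + nodes - 1) // nodes
    return [data[i:i + size] for i in range(0, len(data), size)]

def run_master(data, nodes=3):
    queues = split(data, nodes)
    results = []
    while any(queues):
        for queue in queues:
            if not queue:
                # Work stealing: this node is idle, move a couple of
                # items over from the longest (slowest) queue
                slowest = max(queues, key=len)
                queue.extend(slowest[-2:])
                del slowest[-2:]
            if queue:
                # The node fetches one item, runs the computation
                # and "sends back" the result
                results.append(render(queue.pop(0)))
    return results                      # ready for the aggregation step

if __name__ == '__main__':
    print(run_master(list(range(20))))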

This is just a bit of general theory, but the implementation is really simple. Maybe I'll try to reimplement something more generic in the near future (October, November) when I have a bit more free time. If you have any questions, send me a mail!