[livesupport-dev] Performance testing
  • ad: finding out where the bottleneck is ...

    Because of some problems with running the scheduler, I've done
    performance testing on the storageServer XMLRPC interface only.

    I've run several test cases:
    1) running the PHP XMLRPC client in a bash loop - it repeatedly
    invokes PHP, runs the client, and sends the XMLRPC requests
    (locstor.accessRawAudioData and locstor.releaseRawAudioData)
    2) running the XMLRPC requests in a loop within one PHP script
    (locstor.accessRawAudioData and locstor.releaseRawAudioData)
    3) reference case - only the system.methodHelp XMLRPC request, in a
    loop within one PHP script
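    A minimal sketch of the case-1 loop (the real test invoked a PHP
    client script in place of the CLIENT stand-in used here, so every
    iteration paid the full interpreter start-up cost; the script name
    in the comment is an assumption):

    ```shell
    # Case-1 harness sketch: one client process per iteration.
    # The real run did something like:
    #   php xmlrpc_client.php locstor.accessRawAudioData ...
    # CLIENT defaults to a no-op here so the loop structure is visible.
    CLIENT="${CLIENT:-true}"
    i=0
    while [ "$i" -lt 100 ]; do
        "$CLIENT"            # one access + release request pair
        i=$((i + 1))
    done
    echo "done: $i iterations"
    ```

    Case 2 moves this loop inside a single PHP script, so the
    interpreter and the XMLRPC client are set up only once.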

    Times have been averaged over a 100-run loop.
    Typical results from the 'time' command follow.

    case 1: average = 0.421s
    real 0m42.079s
    user 0m17.532s
    sys 0m2.242s

    case 2: average = 0.225s
    real 0m22.507s
    user 0m0.446s
    sys 0m0.064s

    case 3: average = 0.094s
    real 0m9.418s
    user 0m0.195s
    sys 0m0.036s
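    The averages above are simply the real elapsed time divided by the
    100 iterations:

    ```shell
    # Recompute the per-request averages from the 'time' results:
    # average = real / 100.
    for real in 42.079 22.507 9.418; do
        awk -v t="$real" 'BEGIN { printf "%.3f\n", t / 100 }'
    done
    # -> 0.421, 0.225, 0.094
    ```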


    processor: Celeron M 1.30GHz
    both client and server on the same machine (and the DB server too)

    =>
    The first case was the wrong approach - significant time was spent
    in the client-side loop.
    In the second case there may still be some influence from the client
    side, but it is not significant IMO.
    Roughly half of the time seems to be spent in the XMLRPC routines
    and the other half in the storageServer's scripts.
    In my opinion the response speed is good for an interpreted language
    with R/W DB access (on my machine).

    Does this help with the delay problem?

    Tomas Hlava
    th@red2head.com

    ------------------------------------------
    Posted to Phorum via PhorumMail
  • Tomas Hlava wrote:
    > ad: finding out where the bottleneck is ...
    >
    > Because of some problems with running the scheduler, I've done
    > performance testing on the storageServer XMLRPC interface only.

    that's good. the problem came out with the graphical user interface
    anyway, not with the scheduler...

    > I've run several test cases:
    > 1) running the PHP XMLRPC client in a bash loop - it repeatedly
    > invokes PHP, runs the client, and sends the XMLRPC requests
    > (locstor.accessRawAudioData and locstor.releaseRawAudioData)

    was this done for playlists, or audio files? to simulate the use case in
    question, one playlist and several audio file opens should be tested.

    but you can also trace what XML-RPC calls are made by invoking the GUI,
    uploading some files and then trying to play them through live mode.

    BTW, Doug, wouldn't you want to open a bug report on the issue,
    describing the test case in question? it seems there's a communication
    problem with the e-mail lists, as we have to replicate the same
    information over and over...

    > 2) running the XMLRPC requests in a loop within one PHP script
    > (locstor.accessRawAudioData and locstor.releaseRawAudioData)
    > 3) reference case - only the system.methodHelp XMLRPC request, in a
    > loop within one PHP script
    >
    > Times have been averaged over a 100-run loop.
    > Typical results from the 'time' command follow.
    >
    > case 1: average = 0.421s

    so this takes 0.4 secs for one opening of an audio file (or playlist?).
    for a playlist with 4 audio files inside, that would be .421 * 5 = 2.105
    seconds?

    > user 0m17.532s
    > sys 0m2.242s
    >
    > case 2: average = 0.225s

    this means for the same test case: .225 * 5 = 1.125 seconds?

    > The first case was the wrong approach - significant time was spent
    > in the client-side loop.
    > In the second case there may still be some influence from the client
    > side, but it is not significant IMO.

    yes, probably this is better.

    in your test cycle, are the files only opened, or actually copied over
    as well? AFAIK the GUI opens the file, then makes a copy of each (Ferenc
    can clarify this)

    > Roughly half of the time seems to be spent in the XMLRPC routines
    > and the other half in the storageServer's scripts.
    > In my opinion the response speed is good for an interpreted language
    > with R/W DB access (on my machine).
    >
    > Does this help with the delay problem?

    yes, a delay of over 1 second is a problem..


    but this is what we have, it seems. what we should think about is
    how to access all resources as soon as possible. maybe we could call
    the accessRawAudioData method for each file as soon as it 'appears'
    in the GUI. but this would result in a huge number of open files.
    would that be a performance issue?


    Akos

    ------------------------------------------
  • On Tue, 30 Aug 2005 09:54:43 +0200
  • For playlist opening, from my point of view:
    there should be a way to open only the first audio file when the
    playlist is opened (and prepare the next one ...), because a
    playlist could have ~100 audio clips (e.g. smart playlists), and
    then opening all the files at once could be a really serious problem
    even with optimization and caching.

    Tomas Hlava
    th@red2head.com

    ------------------------------------------
  • Tomas Hlava wrote:
    > *** I've used only access and release - audio files are then accessible
    > and seekable in the local filesystem (in the worst case mounted over LAN).

    but AFAIK the playlist has to be converted at opening time, from the
    stored format to SMIL. but this is done on the client side...

    > *** we should define some 'file closing policy' ;)
    > IMO there's no problem having ~400 files open, but we should
    > close unused files.
    > (Some "garbage collector" for closing resources "opened last year"
    > is still on my task list)
    > If we have ~10,000 files open, there could be access-directory
    > handling overhead, depending on the filesystem used.
    > (access-dir = the dir with the access symlinks - an optimization is
    > possible: split it into directory levels, similarly to the stor dir)
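    The directory-level split hinted at in the quote might look like
    this sketch, assuming a stor-dir-style scheme keyed on the first
    characters of the resource id (the id value and the two-character
    split are illustrative, not the actual layout):

    ```shell
    # Sketch: derive a two-level access-dir path from a resource id,
    # so one flat directory with ~10,000 symlinks becomes many small
    # directories. The gunid value and split width are assumptions.
    gunid="ab34cd56ef78"
    level1=$(printf '%s' "$gunid" | cut -c1-2)  # first two chars -> subdir
    link="access/$level1/$gunid"
    echo "$link"
    ```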

    I don't think that more than several hundred files would be open at
    the same time...


    Akos

    ------------------------------------------
  • Tomas Hlava wrote:
    > For playlist opening, from my point of view:
    > there should be a way to open only the first audio file when the
    > playlist is opened (and prepare the next one ...), because a
    > playlist could have ~100 audio clips (e.g. smart playlists), and
    > then opening all the files at once could be a really serious
    > problem even with optimization and caching.

    but this is a problem for seeking - if you wanted to seek within a
    playlist to a not-yet-opened file, there would be a significant
    delay. so I'm not sure this is a good strategy.

    maybe we could have a single XML-RPC call to open a playlist and all
    the files inside it, as there is significant per-call overhead with
    XML-RPC as well...
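    Such a batched call doesn't exist in the current API; a hypothetical
    request body for it might look like the following (the method name
    locstor.accessPlaylist and its parameters are invented for
    illustration):

    ```shell
    # Hypothetical XML-RPC payload: one call opens the playlist and
    # every clip inside it, paying the transport overhead only once.
    # Method name and parameters are assumptions, not current API.
    payload=$(cat <<'EOF'
    <?xml version="1.0"?>
    <methodCall>
      <methodName>locstor.accessPlaylist</methodName>
      <params>
        <param><value><struct>
          <member><name>sessid</name><value><string>SESSID</string></value></member>
          <member><name>plid</name><value><string>PLAYLIST-GUNID</string></value></member>
          <member><name>recursive</name><value><boolean>1</boolean></value></member>
        </struct></value></param>
      </params>
    </methodCall>
    EOF
    )
    echo "$payload"
    ```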

    Tomas, would you look into this issue? also, if you could look at other
    optimization options, that would be very nice...


    Akos

    ------------------------------------------
  • On Tue, 30 Aug 2005 11:53:09 +0200
  • Tomas Hlava wrote:
    > *** I'm afraid it is better to have a delay on seeking than on every
    > playlist open - is there a high probability of seeking into a playlist?

    thinking of this approach with gstreamer, it doesn't seem plausible.
    it's not really possible to add gstreamer elements (for the newly
    opened audio files) while the pipeline is already playing. it would
    cause hiccups in playback :(

    > *** It looks possible, but will it solve the problem?
    > I'm not sure - if we open all the files in the playlist at once, it
    > will take too long even with optimization, IMO.

    you wrote earlier:

    > Roughly half of the time seems to be spent in the XMLRPC routines
    > and the other half in the storageServer's scripts.

    so instead of 5 * .225 = 1.125 seconds, it would be
    .225 + 4 * (.225 / 2) = .675 seconds - a bit more than half the
    time. a performance gain of about 40% (0.45 seconds here) would be
    a huge improvement.
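    The arithmetic can be checked directly, assuming the batched call
    pays the XML-RPC half only once but the storageServer half for
    every file:

    ```shell
    # 5 separate calls vs. one batched call: the batched call pays the
    # XML-RPC half (~0.225/2 s) once, plus the storageServer half for
    # each of the 5 files.
    awk 'BEGIN {
        per_call = 0.225
        separate = 5 * per_call                    # 1.125 s
        batched  = per_call + 4 * (per_call / 2)   # 0.675 s
        printf "separate=%.3f batched=%.3f saving=%.0f%%\n",
               separate, batched, 100 * (1 - batched / separate)
    }'
    # -> separate=1.125 batched=0.675 saving=40%
    ```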


    Akos

    ------------------------------------------
  • On Tue, 30 Aug 2005 12:54:44 +0200