Wednesday, March 16, 2011

Open source or free financial analysis programs/libraries

I'm looking for something containing functions similar to Matlab's financial and financial derivatives toolboxes, but I don't have the cash to spend on Matlab. I would appreciate any info on free or open source libraries or programs that will let me easily calculate interest rates, risk, etc.

From stackoverflow
  • How about the Octave financial functions?

    http://www.gnu.org/software/octave/doc/interpreter/Financial-Functions.html#Financial-Functions

    I'm not familiar with the Matlab toolbox, so you'll have to judge for yourself.

    GNU Octave is a high-level language, primarily intended for numerical computations. It provides a convenient command line interface for solving linear and nonlinear problems numerically, and for performing other numerical experiments using a language that is mostly compatible with Matlab. It may also be used as a batch-oriented language.

  • How about JQuantLib or QuantLib?

    Jared : Octave doesn't do everything I want but QuantLib should. I'm looking at using SWIG to let me call QuantLib functions from Octave, so I get the ability to deal with large datasets and run calculations without needing to write a custom program to use QuantLib.
  • Exactly what functions do you need? How advanced? You have some financial functions in .Net

    I'm sure it doesn't cover everything, but calculating interest and a few other things is no problem (a short sketch follows the list below):

    http://msdn.microsoft.com/en-us/library/daksysx3(VS.80).aspx

    Calculate depreciation. DDB, SLN, SYD

    Calculate future value. FV

    Calculate interest rate. Rate

    Calculate internal rate of return. IRR, MIRR

    Calculate number of periods. NPer

    Calculate payments. IPmt, Pmt, PPmt

    Calculate present value. NPV, PV
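
    For illustration, a rough C# sketch using a few of these functions. It assumes a project reference to Microsoft.VisualBasic.dll, and the figures are made up; check the exact parameter lists against the linked documentation.

    using System;
    using Microsoft.VisualBasic;   // Financial, DueDate

    class FinancialDemo
    {
        static void Main()
        {
            // Future value of saving 100 per month for 10 years at 5% APR
            double fv = Financial.FV(0.05 / 12, 120, -100.0, 0.0, DueDate.EndOfPeriod);

            // Monthly payment on a 200,000 loan over 30 years at 6% APR (negative = cash outflow)
            double pmt = Financial.Pmt(0.06 / 12, 360, 200000.0, 0.0, DueDate.EndOfPeriod);

            // Internal rate of return of a simple cash-flow series
            double[] cashFlows = { -1000.0, 300.0, 400.0, 500.0 };
            double irr = Financial.IRR(ref cashFlows, 0.1);

            Console.WriteLine("FV={0:F2}  Pmt={1:F2}  IRR={2:P2}", fv, pmt, irr);
        }
    }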

When a Qt widget gets focus

I'm new to Qt Designer 4.4.1 Open Source Edition. I used to program with Borland C++ Builder on Windows and I've switched to Linux.

I do not know how to get control when a widget (a LineEdit in this specific case) gains the focus, no matter whether by Tab, by clicking, or by any other means. The widget's focus policy is set to "StrongFocus", so it is enabled to receive focus.

In Borland Builder, each object has a table of all possible events. For an Edit field, among the events there is one called "OnEnter" which signals that focus has entered the object (and similarly there is "OnExit").

Does Qt have something similar?
Can someone help me? I'll be grateful. Luis

From stackoverflow
  • There is a "focusChanged" signal sent when the focus changes.
    It has two arguments, the widget losing focus and the one gaining focus.

  • I'd have to play with it, but just looking at the Qt documentation, there is a "focusInEvent". This is an event handler.

    Here's how you find this kind of information: open up "Qt Assistant", go to the Index and put in "QLineEdit". There is a really useful link called "List of all members, including inherited members" on all the widget pages. This list is great because it even includes the inherited members.

    I did a quick search for "Focus" and found all the stuff related to focus for this widget.

  • Dear Bob

    Many thanks for your attention to my problem. Before posting the question I had already done that reading, but when I tried to code it in Form120's constructor like this:

    connect(lineEdit12, SIGNAL(focusInEvent(lineEdit12)), this, SLOT(myroutine(void)));

    g++ compiled it fine, but at runtime I received these messages:

    Object::connect: No such signal QLineEdit::focusInEvent(lineEdit12)

    Object::connect: (sender name: 'lineEdit12')

    Object::connect: (receiver name: 'Form120')

    Please, I ask for your help. Thanks, Luis

    Max Howell : This isn't a forum, reply as a comment.
  • QWidget::setFocus() is a slot, not a signal. You can check whether the QLineEdit has focus via its focus property. QLineEdit emits signals when the text is changed or edited; see the documentation.

  • Qt Designer isn't designed for this level of WYSIWYG programming.

    Do it in C++:

    class LineEdit : public QLineEdit
    {
    protected:
        // Called whenever the line edit gains keyboard focus
        virtual void focusInEvent( QFocusEvent* event )
        {
            QLineEdit::focusInEvent( event );  // keep the default behaviour
            // ... react to gaining focus here ...
        }
    };
    
  • You have hit one of the weird splits in Qt: if you look at the documentation, focusInEvent is not a slot, it is a protected function; you can override it if you are implementing a subclass of your widget. If you just want to catch the event coming into your widget, you can use QObject::installEventFilter, which lets you catch any kind of event.

    For some odd reason the developers at Trolltech decided to propagate UI events via two avenues: signals/slots and QEvent.

    Max Howell : You can override a virtual base function. That is why events are virtual functions. Trolltech use signals instead when inheritance for that functionality is not likely to be useful.
  • The simplest way is to connect a slot to the QApplication::focusChanged signal.
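
    A rough sketch for Qt 4 (onFocusChanged is a made-up slot name that would need to be declared as a slot in Form120's header; lineEdit12 is the widget from the question):

    // In Form120's constructor:
    connect(qApp, SIGNAL(focusChanged(QWidget*,QWidget*)),
            this, SLOT(onFocusChanged(QWidget*,QWidget*)));

    // The slot:
    void Form120::onFocusChanged(QWidget *old, QWidget *now)
    {
        if (now == lineEdit12)
            qDebug("lineEdit12 gained focus");
        else if (old == lineEdit12)
            qDebug("lineEdit12 lost focus");
    }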

  • You can add an event filter.
    This is an example from an application written with Qt Creator. The form has a QComboBox named comboBox.

    
    MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),ui(new Ui::MainWindow)
    {
        ui->setupUi(this);
        ui->comboBox->installEventFilter(this);
        .
        .
        .
    }
    
    bool MainWindow::eventFilter(QObject *object, QEvent *event)
    {
        if (event->type() == QEvent::FocusOut)
        {
            if (object == ui->comboBox)
            {
                qWarning(object->objectName().toLatin1().data());
            }
        }
        return false;
    }
    
    lostriebo : +1 Exactly what I was looking for, thank you!
  • Karl you are The Man!

Appropriate use of DL and DD?

I had some site templates designed for me recently. I got the final HTML code, which validates, but the structure of the document is laid out using DL-DD pairs:

<dl>
  <dd class="some-class">
    Some text.
  </dd>
</dl>

I'm not especially familiar with those tags as I've never used them much, but they don't seem intended for document structure. Am I right? Why would a designer do this?

From stackoverflow
  • From the W3C: http://www.w3.org/TR/html401/struct/lists.html#h-10.3

    Definition lists vary only slightly from other types of lists in that list items consist of two parts: a term and a description. The term is given by the DT element and is restricted to inline content. The description is given with a DD element that contains block-level content.

    So given your example, these types of structures should be used for a specific type of content and not structuring of data that does not fit within this context.

    Ian : My main question is, is there any valid reason a designer would use these tags for document structure, or is this just WTF code?
    Gavin Miller : Ahh, I see; yes, this is a WTF piece of code.
  • You're right in that it shouldn't be used like that. The only reason I can think of for the designer using them is that the <dd> tag is indented in most browsers. If they're overriding the padding/margins on them, then your guess is as good as mine.

  • A DL tag is about the same as a UL tag: it starts a list.

    The difference is that there basically is no bullet in a DL/DD pair.

    Most of the time, though, it's used for its real purpose, a definition list, together with DT and DD (Definition Term and Definition Description), which would look like:

    <DL>
      <DT>CSS</DT>
      <DD>Cascading Style Sheet</DD>
    </DL>
    

    which will, by default, indent the term a bit and indent its definition a bit more.

Is "Commit" necessary when updating Oracle from asp.net?

I have this code

    protected void btnUpdateAddress_Click(object sender, EventArgs e)
    {
        sdsAddressComparison.Update();
    }

that I'm using to update an Oracle database. When I run the update SQL in SQL Navigator I have to type "Commit" or hit the commit button.

Do I have to code in a "Commit" somewhere in ASP.NET? If so, how and where do I do it?

From stackoverflow
  • Normally, no, you do not need the commit.

    However, you can write the code to use a transaction, and at the completion of the transaction call commit.

    E.G. (Regular):

    try {
        // Open connection
        dbConn.Open();
        // DB update code here
    }
    catch (Exception ex) {
        throw;
    }
    finally {
        // Close database connection
        dbConn.Close();
    }
    

    E.G. (As Transaction):

    try {
        // Open connection & begin transaction
        dbConn.Open();
        dbTran = dbConn.BeginTransaction();

        // DB update code here

        // Commit transaction
        dbTran.Commit();
    }
    catch (Exception ex) {
        // Rollback transaction
        dbTran.Rollback();
        throw;
    }
    finally {
        // Close database connection
        dbConn.Close();
    }
    
  • Just for clarification: I'm not talking about SQL Server transactions. I'm talking about Oracle updates, which usually require a commit command when I use either SQL Navigator or SQL*Plus.

    The reason I'm posting this is that I can update this data in SQL Navigator, but it doesn't update when I use ASP.NET.

    Brian Schmitt : Yep, you can do transactions in Oracle too.
    BQ : A transaction is a general concept, not a feature specific to SQL Server (or any other RDBMS). As is any DML (select/insert/update/delete) statement.
  • By default, your ASP.Net code, and most other client API's for databases (ODBC, OLE DB, JDBC, etc), run in auto-commit mode. That is, any time a statement is executed successfully, the result is committed. If you are running in that sort of a default mode, there is no need to explicitly commit your update.

    On the other hand, there is generally a great deal to be said for putting your updates in explicit transactions-- if you ever have to issue multiple updates in order to make one logical business change, the default auto-commit mode is a very poor one. The classic example here is that if you update account A to withdraw $50 and then update account B to deposit $50 and you end up having two different transactions because of auto-commit being enabled, it is possible that the first transaction would succeed while the other transaction fails and the system loses track of $50.

    So you generally want to write code similar to what Brian has demonstrated where you use transactions and issue the explicit commit. But by default, you don't have to and your updates will auto-commit.

    David Aldridge : +1: Yes, and I think that autocommit is a Very Bad Thing. If the associated problems are not already obvious, then let me add that it also harms performance on Oracle.
  • How do I check if autocommit is turned on?

    BQ : That depends on the application you're using to connect to the database. Look in SQL Navigator's options dialog.
  • The behavior you're seeing in SQL Navigator is probably determined by an options setting.

    I haven't used SQL Navigator, but I do use TOAD which is also by Quest Software. In the options dialog there, it's under View->Toad Options..., then the Oracle->Transactions node.

    There's the following relevant settings:

    • [ ] Commit after every statement (checkbox)

    • When closing connections: (radio selection)

      • ( ) Commit
      • ( ) Rollback
      • ( ) Prompt for commit/rollback when changes detected, or detection is not possible due to lack of privileges on dbms_transaction.

    So you could change the setting so you don't need to hit the commit button (or type "commit"), but it's generally a bad practice since a commit is something that you should explicitly be doing (or explicitly rolling back).

  • Autocommit is also available in SQL*Plus.

    SET AUTOCOMMIT ON
    SET AUTOCOMMIT OFF
    

    or

    SET AUTOCOMMIT 100
    

    Use SHOW AUTOCOMMIT to see the current setting.

    But ... I hate this setting. You commit at the end of a meaningful unit of work, not part way through.

Accessing host machine IIS from a guest OS in VMWare

How can I access a site configured in IIS 7 on the host machine from a guest OS in VMWare (Fedora 10)? I have configured the VM to use "NAT".

From stackoverflow
  • It depends on the network configuration of the VMware product you are using (Player, Server, Workstation). If it is set to bridged mode, then you can reach it like any other machine, by the host machine's IP. If it is "host only" or NAT mode, check what the gateway IP for the guest is (/sbin/route), and try using it:

    # /sbin/route
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    
    default         10.x.y.z        0.0.0.0         UG    0      0        0 eth0
    

    In this case, 10.x.y.z is the ip of the host machine.

    Nick : I am sorry, I am using NAT ..
    Nick : And i am using VMWare Workstation
    Sunny : So, there is a "private" network between the host and the guest. Make sure your IIS site binds to the "private" ip as well, not only to the "real" IP and localhost. Then, as I said, check in the guest what is the gateway IP, it is the same as the host "private" IP, so you can use it.
  • Pretty easy...

    Step 1. Configure IIS on the host OS to include a binding if you would like to use a "URL", for example mySite.com.

    Step 2. In the guest, open up the hosts file c:\windows\system32\drivers\etc\hosts.

    Add the entry

    [host IP address] [host IIS binding URL]

    Example

    192.168.0.1 mySite.com

    Restart your browser in the guest, and you should be good to go.

    Jan Aagaard : Why not simply type the IP number of the host in the address bar in the browser on the guest?

Measuring trust for users of a service.

Let's say you have a program that allows access to some sort of media. This media can be damaged. It is only possible for users to know whether the media is damaged after they use the service and receive the media. So to make your users happy, you want your program to give them the ability to turn the media back in for a refund. However, malicious users will obviously try to game this system by asking for a refund on perfectly good media.

The question is what would be a good algorithm to decide whether or not to trust a given user. How should users build trust? How should trust be spent?

I imagine there must be some academic research on how to construct 'trust' values for known users and so on. Anyone have links to papers or some sort of research? I would even be happy to read random thoughts on the problem but I am more interested in actual papers.

From stackoverflow
  • If you are referring to physical media, the first analogy is buying a CD, DVD or video game from a store. If you return it, they won't give you a refund, but they will give you a non-defective copy if that was your problem.

    There's no reason for a user to suddenly decide not to have the media if the first copy was bad if they can easily get a second, non-defective copy for free.

  • To clarify....

    There are no humans involved in this process. Users approach the service, use it, and attempt to consume the media; then there is a chance of a problem. If there is a problem, the users want to get a refund. The question is how to build data over time as to the trustworthiness of a given user.

    The bad user scenario would go something like: User consumes media successfully, lies to the service, asks for refund.

    Marcin : If no humans are involved, what are the users?
    Tim : you need to provide more information - there may be limiting aspects to this. The user may need to provide more information for a refund so you can track abuse. Alternatively/also you can issue a credit rather than a refund.
  • This is very programming related; it is describing an algorithm.

    Although I've never seen a paper on the scenario you are discussing, it seems like it should be pretty straightforward. I think I would track two axes, by media and by user, in a pretty simple, linear fashion.

    First of all, at some point the sales/returns ratio should be able to indicate that you need to pull the item; that'd be my first line of defense!

    If a user asks for a refund, I'd check the sales/returns ratio; if it's not Very Low, there is a good chance the media is bad. In this case I'd allow the refund (and increment the user's trust).

    If the ratio is very low AND the total number of sales is low, I'd check the trust stat and if it's high, allow the return and adjust stats (but I wouldn't increment the trust stat except in the above case--because in that case it correlates with other users)

    If the number of sales is high and the number of returns is low AND the user has a low "trust" stat, then I'd deny it. (A rough sketch of these rules appears at the end of this answer.)

    edit:

    Also, I'd track all refunds separately, exactly who returned what and not just a simple counter as my post implies. In that way, if your algorithm is insufficient, you could implement a new algorithm that could recalculate your existing data on the fly.

    It could also be used to evaluate patterns of abuse--in other words, if you do identify a pattern someone has been using to scam the system, you could create a new pattern detector and execute it to find other accounts that have been using the same pattern, then show them goatse or something next time they make a request.
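
    A rough C# sketch of the rules above (all thresholds and type names are made up and would need tuning against real data; it only shows the shape of the decision, not a finished policy):

    class RefundPolicy
    {
        const double VeryLowReturnRatio = 0.02; // hypothetical "Very Low" cut-off
        const int LowSalesCount = 50;           // hypothetical "low total sales"
        const int HighTrust = 5;                // hypothetical trust threshold

        public bool AllowRefund(MediaStats media, UserStats user)
        {
            double returnRatio = media.Sales == 0 ? 0.0 : (double)media.Returns / media.Sales;

            // Ratio is not "Very Low": the media itself is probably bad.
            if (returnRatio > VeryLowReturnRatio)
            {
                user.Trust++;   // the refund correlates with other users' returns
                return true;
            }

            // Ratio very low but little sales history: fall back on the user's trust.
            if (media.Sales < LowSalesCount && user.Trust >= HighTrust)
                return true;

            // Lots of sales, almost no returns, untrusted user: deny.
            return false;
        }
    }

    class MediaStats { public int Sales; public int Returns; }
    class UserStats  { public int Trust; }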

Detecting Vista IE7 Protected Mode with Javascript

I'd like to be able to detect Vista IE7 Protected Mode within a page using javascript, preferably. My thinking is to perform an action that would violate protected mode, thus exposing it. The goal is to give appropriate site help messaging to IE7 Vista users.

From stackoverflow
  • What are you trying to accomplish that is substantially different for protected users? I've seen some window popup issues, but otherwise, clean JavaScript tends to be less affected. If you're finding that a block of code won't execute, why not do a check after attempted execution to see if the document is the state you expect, and alert if not.

    If it's it using ActiveX, MS has a Protected Mode API: http://msdn.microsoft.com/en-us/library/ms537316(VS.85).aspx

    Protected mode is customizable, so you don't really have a reliable way of detecting it based on action violation. And many users disable UAC, so detecting via version won't work either (not to mention you may have the same headache with IE 8. Always best to test if something worked instead of assume by version.)

  • Adobe Flash may behave differently (or not as expected) in Vista IE7 Protected Mode. The intention is to provide this feedback to the user.

    Mr. Shiny and New : If I had Flash which wasn't working in protected mode, I'd try to fix my use of Flash to work around that. Many users won't understand the problem if you explain it, and those that do may be unwilling to turn off protected mode. (unless, of course, you have a safe workaround...)
  • I reviewed the MSDN documentation of the Protected Mode API from 6t7m4, and if I could call the ieframe.dll from javascript, such as:

    var axo = new ActiveXObject("ieframe.dll");
    

    I would be able to access the IEIsProtectedModeProcess() function, but I believe it's inaccessible due to security reasons.

    recursive : You should include these updates as edits to your original question.

How do I do TDD efficiently with NHibernate?

It seems to me that most people write their tests against in-memory, in-process databases (like SQLite) when working with NHibernate, which is fine. I have this up and running, but my first test (that uses NHibernate) always takes 3-4 seconds to execute; the next tests run much faster.

I am using Fluent NHibernate to do the mapping but get roughly the same timings with XML mapping files. For me personally, 3-4 second test execution seriously disrupts my flow.

What is the recommended way of working with TDD and NHibernate?

Is it possible to mock ISession to unit test the actual queries, or can this only be done with in-memory databases?

From stackoverflow
  • See http://www.autumnofagile.net and http://www.summerofnhibernate.com

    bob : interesting links... thanks
    David : Cool links, and they do relate to NHibernate, but what specifically do they provide to answer the question?
    Webjedi : In the series he uses TDD and shows it in action.
  • Have you tried changing some of the defaults in the optional configuration properties? The slowdown is most likely related to certain optimizations nhibernate does with code generation.

    http://nhforge.org/doc/nh/en/index.html#configuration-optional

    It seems like an in-memory DB is going to be the fastest way to test your data layer. It also seems that once you start testing your data layer you're moving a little beyond the realm of a unit test.

  • I am using the Repository pattern to perform database operations, and whenever I run my tests I just run the higher-level tests that simply mock the repository (with Rhino Mocks).

    I have a separate suite of tests that explicitly tests the repository layer and the NHibernate mappings. And those usually don't change as much as the business and GUI logic above them.

    That way I get very fast unit tests that never hit the DB, and still a well-tested DB layer.
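
    A minimal sketch of that split, using Rhino Mocks' AAA syntax and NUnit (ICustomerRepository, Customer and CustomerService are made-up types for illustration):

    using NUnit.Framework;
    using Rhino.Mocks;

    public interface ICustomerRepository { Customer GetById(int id); }
    public class Customer { public string Name { get; set; } }

    public class CustomerService
    {
        private readonly ICustomerRepository _repository;
        public CustomerService(ICustomerRepository repository) { _repository = repository; }
        public string GreetingFor(int id) { return "Hello " + _repository.GetById(id).Name; }
    }

    [TestFixture]
    public class CustomerServiceTests
    {
        [Test]
        public void Greeting_uses_customer_name_from_repository()
        {
            // Arrange: stub the repository so no database is touched.
            var repository = MockRepository.GenerateStub<ICustomerRepository>();
            repository.Stub(r => r.GetById(42)).Return(new Customer { Name = "Ada" });

            // Act + Assert: only business logic runs, so the test stays fast.
            var service = new CustomerService(repository);
            Assert.AreEqual("Hello Ada", service.GreetingFor(42));
        }
    }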

    Mendelt : Those separate tests are more in the realm of integration tests anyway. Nice that you can automate them, but it's good to keep them separate.
    Tigraine : Yeah, nhibernate makes it very easy to do those tests by having the database re-created every time you start a test. That takes time and my NHibernate Tests take about a Minute to run, but still, they run and I know when things don't work.
    c.sokun : How about testing methods that require query data, or save operations that require foreign keys, etc.? I could imagine it's a lot of work to load data for testing.
  • Unit testing data access is not possible, but you can integration test it. I create integration tests for my data access in a separate project from my unit tests. I only run the (slow) integration tests when I change something in the repositories, mappings or database schema. Because the integration tests are not mixed with the unit tests, I can still run the unit tests about 100 times a day without getting annoyed.

    maz : I'm not sure you are correct about that. It really depends on HOW you write tests. If your tests are atomic, order independent and isolated, intention revealing, easy to set up and fast, they are in every sense unit tests. See: http://codebetter.com/blogs/jeremy.miller/archive/2005/07/20/129552.aspx
    Paco : A test that interacts with the file system, a database or any other system is not independent, so it is not a unit test. I know J. Miller agrees with that. By fast I mean being able to run a thousand tests in 10 seconds. Whatever you do, you won't reach that speed with DB integration tests.
    maz : Yes, I agree that a "...test that interacts with the file system, a database or any other system is not independent...", but what I am talking about here is running against an in-process, in-memory DB. This is no different from any other in-memory object you might have in your tests today.

Simple JavaScript problem: onClick confirm not preventing default action

I'm making a simple remove link with an onClick event that brings up a confirm dialog. I want to confirm that the user wants to delete an entry. However, it seems that when Cancel is clicked in the dialog, the default action (i.e. the href link) is still taking place, so the entry still gets deleted. Not sure what I'm doing wrong here... Any input would be much appreciated.

EDIT: Actually, the way the code is now, the page doesn't even make the function call... so, no dialog comes up at all. I did have the onClick code as:

onClick="confirm('Delete entry?')"

which did bring up a dialog, but was still going to the link on Cancel.

<%@ taglib prefix="c" uri="http://java.sun.com/jstl/core_rt"%>
<%@ taglib prefix="fmt" uri="http://java.sun.com/jstl/fmt_rt"%>
<%@ taglib uri="http://java.sun.com/jsp/jstl/functions" prefix="fn" %>

<script type="text/javascript">

function delete() {
    return confirm('Delete entry?')
}

</script>


...

<tr>
 <c:if test="${userIDRO}">
    <td>
        <a href="showSkill.htm?row=<c:out value="${skill.employeeSkillId}"/>" />
     <img src="images/edit.GIF" ALT="Edit this skill." border="1"/></a>
    </td>
    <td>
        <a href="showSkill.htm?row=<c:out value="${skill.employeeSkillId}&remove=1"/>" onClick="return delete()"/>
        <img src="images/remove.GIF" ALT="Remove this skill." border="1"/></a>
    </td>
 </c:if>
</tr>

From stackoverflow
  • I use this, works like a charm. No need to have any functions, just inline with your link(s)

    onclick="javascript:return confirm('Are you sure you want to delete this comment?')"
    
    some : There is also no need to use "javascript:". But like siukurnin said, you should use POST if you change something.
  • First of all, delete is a reserved word in javascript; I'm surprised this even executes for you (when I test it in Firefox, I get a syntax error).

    Secondly, your HTML looks weird - is there a reason you're closing the opening anchor tags with /> instead of just > ?

    kchau : Wow, brain fart. Didn't think to check 'delete' as a keyword. Kind of a "duh" when so many other languages have it reserved. :P
    some : XHTML uses /> to close empty tags... That means that in the code from the original post, the a-tags are closed twice!
  • Using a simple link for an action such as removing a record looks dangerous to me: what if a crawler is trying to index your pages? It will ignore any javascript and follow every link, which is probably not a good thing.

    You'd better use a form with method="POST".

    And then you will have an "OnSubmit" event to do exactly what you want...
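
    A rough sketch of what that could look like for the remove action in the question (this assumes showSkill.htm also accepts POST; the hidden fields mirror the row and remove parameters from the original link):

    <form method="POST" action="showSkill.htm"
          onsubmit="return confirm('Delete entry?');">
      <input type="hidden" name="row" value="<c:out value="${skill.employeeSkillId}"/>" />
      <input type="hidden" name="remove" value="1" />
      <input type="image" src="images/remove.GIF" alt="Remove this skill." />
    </form>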

    Peter Bailey : This is an excellent point. Not only is it potentially dangerous to your application, it's actually a violation of the HTTP spec.
    kchau : JW. Can a crawler access records without being able to login?
    Benry : No, crawlers would not be able to access protected portions of your site.
    some : If the "protection" depends on javascript, the crawler (and anyone who has turned off javascript) can access it.
  • I've had issue with IE7 and returning false before.

    Check my answer here to another problem: Javascript not running on IE

  • There's a typo in your code (the a tag is closed too early). You can either use:

    <a href="whatever" onclick="return confirm('are you sure?')"><img ...></a>
    

    note the return: the value returned by scripts in intrinsic events decides whether the default browser action is run or not; in case you need to run a big piece of code you can of course call another function:

    <script type="text/javascript">
    function confirm_delete() {
      return confirm('are you sure?');
    }
    </script>
    ...
    <a href="whatever" onclick="return confirm_delete()"><img ...></a>
    

    (note that delete is a keyword)

    For completeness: modern browsers also support DOM events, allowing you to register more than one handler for the same event on each object, access the details of the event, stop the propagation and much more; see DOM Events.

    kchau : Thanks, didn't notice the typo.
  • Thanks, paranic! Works like a charm. Tested on Firefox.

  • Hi all,

    Well, I used to have the same problem, and it got solved by adding the word "return" before confirm:

    onClick="return confirm('Delete entry?')"

    I hope this is helpful for you.

    Good Luck!

Quickest way to get list of <title> values from all pages on localhost website

I essentially want to spider my local site and create a list of all the titles and URLs as in:

http://localhost/mySite/Default.aspx      My Home Page
http://localhost/mySite/Preferences.aspx  My Preferences
http://localhost/mySite/Messages.aspx     Messages

I'm running Windows. I'm open to anything that works--a C# console app, PowerShell, some existing tool, etc. We can assume that the <title> tag does exist in the document.

Note: I need to actually spider the files since the title may be set in code rather than markup.

From stackoverflow
  • Ok, I'm not familiar with Windows, but to get you in the right direction: use an XSLT transformation with

    <xsl:value-of select="/head/title" /> in there to get the title back or if you can, use the XPath '/head/title' to get the title back.

  • A quick and dirty Cygwin Bash script which does the job:

    #!/bin/bash
    for file in $(find $WWWROOT -iname \*.aspx); do
      echo -en $file '\t'
      cat "$file" | tr '\n' ' ' | sed 's/.*<title>\([^<]*\)<\/title>.*/\1/'
    done
    

    Explanation: this finds every .aspx file under the root directory $WWWROOT, replaces all newlines with spaces so that there are no newlines between the <title> and </title>, and then grabs out the text between those tags.

    Larsenal : This doesn't seem to quite work. What am I doing wrong?
  • I think a script similar to what Adam Rosenfield suggested is what you want, but if you want the actual URLs, try using wget. With some appropriate options, it will print out a list of all the pages on your site (plus download them, which maybe you can suppress with --spider). The wget program is avaliable through the normal Cygwin installer.
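
    A rough sketch combining that with Adam's title extraction, assuming the site from the question is reachable at http://localhost/mySite/ (flags and paths may need adjusting for your setup):

    #!/bin/bash
    # Mirror the site so the spidered pages (with code-generated titles) end up on disk.
    wget --recursive --no-parent --accept aspx http://localhost/mySite/

    # Pull the <title> out of each downloaded page.
    for file in $(find localhost -iname '*.aspx'); do
      title=$(tr '\n' ' ' < "$file" | sed 's/.*<title>\([^<]*\)<\/title>.*/\1/')
      echo -e "$file\t$title"
    done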

    Dustin : Yeah, that is what I was trying to get working to post here! Here's a snippet: site=mysite.com wget --recursive --accept \*.html http://$site ;for file in $( find $site -name *.html ); do // adam's for-body
  • I would use wget as detailed above. Be sure you don't have any spider traps on your site.

  • You should consider using the Scrapy shell.

    Check out

    http://doc.scrapy.org/intro/tutorial.html

    In the console, put something like this:

    hxs.x('/html/head/title/text()').extract()

    If you want all the titles, you should write a spider... it's really easy.

    Also consider moving to Linux :P

Lookup table size reduction

Hello:

I have an application in which I have to store a couple of million integers in a lookup table. Obviously I cannot store that amount of data in memory, and my requirements are very constrained: the data has to live in an embedded system, so I am very limited in space. I would therefore like to ask about recommended methods for reducing the size of the lookup table. I cannot use function approximation such as neural networks; the values need to be in a table. The range of the integers is not known at the moment. When I say integers I mean 32-bit values.

Basically the idea is to use some compression method to reduce the amount of memory without losing much precision. This needs to run in hardware, so the computation overhead cannot be very high.

In my algorithm I have to access one value of the table, do some operations with it, and afterwards update the value. In the end, what I should have is one function to which I pass an index and get a value back, and another function to write a value into the table.

I found one method called tile coding (http://www.cs.ualberta.ca/~sutton/book/8/node6.html), which is based on several lookup tables. Does anyone know of any other methods?

Thanks.

From stackoverflow
  • I need more detail on the problem. If you cannot store the real value of the integers but instead an approximation, that means you are going to reduce (throw away) some of the data (detail), correct? I think you are looking for a hash, which can be an art form in itself. For example, say you have 32-bit values: one hash would be to take the 4 bytes and XOR them together, resulting in a single 8-bit value, reducing your storage by a factor of 4 but also reducing the real value of the original data. Typically you could/would go further and use only a few of those 8 bits, say the lower 4, and reduce the value further (a small sketch of this folding follows below).

    I think my real problem is: either you need the data or you don't. If you need the data, you need to compress it or find more memory to store it. If you don't, then use a hash of some sort to reduce the number of bits until you reach the amount of memory you have for storage.
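
    A minimal C# sketch of that XOR folding (the method names are made up; on an embedded target the same idea is just a few shift/XOR operations):

    static class HashFold
    {
        // Collapse a 32-bit value into 8 bits by XOR-ing its four bytes together (lossy).
        public static byte Fold32To8(uint value)
        {
            return (byte)((value ^ (value >> 8) ^ (value >> 16) ^ (value >> 24)) & 0xFF);
        }

        // Optionally keep only the lower 4 bits for an even smaller index.
        public static int Fold8To4(byte folded)
        {
            return folded & 0x0F;
        }
    }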

    Ryan : Yes, you are right, I have to use some method to throw away some data without losing much precision. I have to use a compression method that is fast in execution time because it needs to run in real time. Does the XOR-based algorithm you describe have a name?
  • I'd look at the types of numbers you need to store and pull out the information that's common for many of them. For example, if they're tightly clustered, you can take the mean, store it, and store the offsets. The offsets will have fewer bits than the original numbers. Or, if they're more or less uniformly distributed, you can store the first number and then store the offset to the next number.

    It would help to know what your key is to look up the numbers.

    Ryan : This is an RL (reinforcement learning) application; those values are the Q values.
  • Read http://www.cs.ualberta.ca/~sutton/RL-FAQ.html

    "Function approximation" refers to the use of a parameterized functional form to represent the value function (and/or the policy), as opposed to a simple table."

    Perhaps that applies. Also, update your question with additional facts -- don't merely answer in the comments.


    Edit.

    A bit array can easily store a bit for each of your millions of numbers. Let's say you have numbers in the range of 1 to 8 million. In a single megabyte of storage you can have a 1 bit for each number in your set and a 0 for each number not in your set.

    If you have numbers in the range of 1 to 32 million, you'll require 4 MB of memory for a big table of all 32M distinct numbers.

    See my answer to http://stackoverflow.com/questions/311202/modern-high-performance-bloom-filter-in-python#311360 for a Python implementation of a bit array of unlimited size.
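
    For .NET, the built-in System.Collections.BitArray gives you the same thing; a quick sketch sized for the 1-to-32-million example above:

    using System;
    using System.Collections;

    class BitSetDemo
    {
        static void Main()
        {
            // One bit per possible number 0..31,999,999 -- roughly 4 MB in total.
            var present = new BitArray(32000000);

            present[12345678] = true;                // mark a number as present
            Console.WriteLine(present[12345678]);    // True
            Console.WriteLine(present[999]);         // False
        }
    }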

    Ryan : Function approximation is a very common solution, and the normal solution to this problem; sadly I need to run this on an embedded system, so I cannot use the usual function approximation due to the computation overhead.
    S.Lott : "Computation overhead" is usually a false concern. If you don't have benchmark data for your function, you'll waste a lot of time on a lookup that turns out to be slower.
  • If you are merely looking for the presence of the number in question, a Bloom filter might be what you are looking for. Honestly, though, your question is fairly vague and confusing. It would help to explain what Q values are and what you do with them once you find them in the table.

sql search query for multiple optional parameters

I'm trying to write a query for an advanced search page on my document archiving system. I'm attempting to search by multiple optional parameters. I have about 5 parameters that could be empty strings or search strings. I know I shouldn't have to check for each as a string or empty and create a separate stored procedure for each combination.

Edit: Ended up using:

ISNULL(COALESCE(@var, a.col), '') = ISNULL(a.col, '')
From stackoverflow
  • You can put OR's in your WHERE clause like so:

    WHERE
       (@var1 = '' OR col1 = @var1) AND
       (@var2 = '' OR col2 = @var2) AND
       (@var3 = '' OR col3 = @var3) ...
    
    Timothy Khouri : While this solution will work, it's incredibly expensive. Don't use OR... instead use the ISNULL (example above).
    G Mastros : This solution will work in ALL cases. The IsNull/Coalesce solution will only work under controlled circumstances. When you use Coalesce, you are still testing for a column to EQUAL a value. If the value in the column is NULL, it will not be EQUAL and the row will NOT be returned.
  • You can pass optional parameters to a stored procedure but the optimizer will build a plan based on the specific calls you make to that proc. There are some tricks in SQL Server 2005 and later to avoid this (parameter sniffing, 'with no compile' hints, etc.)

    Even with that, tho, I prefer to build a view with the core of the query and then use that view in several procs with the specific parameters. That allows SQL to optimize as it wants/should and I still get to consolidate the query specifics.
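
    A rough sketch of that layout (the view, table and column names are made up for illustration):

    -- Core of the query lives in one view.
    CREATE VIEW dbo.DocumentSearch AS
    SELECT d.DocumentId, d.Title, d.Author, d.CreatedOn
    FROM dbo.Documents d;
    GO

    -- Each proc keeps its own simple, easily optimized predicate.
    CREATE PROCEDURE dbo.SearchByAuthor @Author varchar(100)
    AS
        SELECT DocumentId, Title, Author, CreatedOn
        FROM dbo.DocumentSearch
        WHERE Author = @Author;
    GO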

  • You could use COALESCE (or ISNULL) like so:

    WHERE COALESCE(@var1, col1) = col1 
    AND COALESCE(@var2, col2) = col2 
    AND COALESCE(@var3, col3) = col3
    
    G Mastros : This solution will not work if the column value is NULL, because NULL cannot be tested that way. If the value is NULL, the row will be filtered out. This is not what you want.
  • Even better is to make the parameter optional with a default of NULL and then test for NULL in the WHERE clause, just like the empty string case...

  • I usually do this :P

    WHERE (@var1 IS NULL OR col1 = @var1) AND (@var2 IS NULL OR col2 = @var2) ...

  • An alternative is to dynamically build the SQL in the stored procedure; this produces the best possible plan for the query, and a plan will be created and used anyway (in 2005 and above).
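
    A rough sketch of that approach with sp_executesql (the table and column names are made up; the parameters stay parameterized, so each distinct query shape gets its own plan):

    CREATE PROCEDURE dbo.SearchDocuments
        @Author varchar(100) = NULL,
        @Title  varchar(200) = NULL
    AS
    BEGIN
        DECLARE @sql nvarchar(max);
        SET @sql = N'SELECT DocumentId, Title, Author FROM dbo.Documents WHERE 1 = 1';

        IF @Author IS NOT NULL SET @sql = @sql + N' AND Author = @Author';
        IF @Title  IS NOT NULL SET @sql = @sql + N' AND Title LIKE @Title + ''%''';

        EXEC sp_executesql @sql,
             N'@Author varchar(100), @Title varchar(200)',
             @Author = @Author, @Title = @Title;
    END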

  • Hello there,

    Thank you very, very much! It works for me. Thanks a lot.

When to use FOR-CASE (Foreach/switch in C#)?

I've found what seems to be the C# equivalent of a FOR-CASE structure in a project I'm working on:

foreach (string param in parms.Split(';'))
{
    string[] parts = param.Split('=');
    string key = parts[0].Trim().ToLower();
    string value = parts[1].Trim();
    switch (key)
    {
        case "param1": this.param1 = value; break;
        case "param2": this.param2 = value; break;
        case "param3": this.param3 = value; break;
        case "param4": this.param4 = value; break;
        default: break;
    }
}

(Variable names changed to protect the guilty.)

How would you implement this code?

From stackoverflow
  • I don't think the code in your question is anything like the code you linked to....

    The code in the question looks like something I might do if I wrote a command line tool.

    Am I stupid for not seeing what's wrong with the code in the question?

    An alternative is to use reflection to fill parameter value variables. I've done it that way sometimes too.

    BTW: I once wrote a program in a script language that had switch as the only flow control mechanism and no gosub/return. The code in my program was structured a bit like the one you linked to. A massive switch on a sort of instruction pointer variable that got reassigned at the end of every case and an almost infinite loop around the switch. It got the job done.

    Harper Shelby : If I were doing this, I'd probably create a dictionary based on the input string, and assign the params from the dictionary: param1 = input_map["param1"]; Sure, you'd have to catch a KeyNotFoundException, but the code would look cleaner IMHO.
    Nathan Koop : I agree with your assesment
    John MacIntyre : " I once wrote a program in a script language that had switch as the only flow control mechanism ..." Sounds like a 'C' Windows message pump.
    Guge : @Harper - I try not to use dictionaries where I can use a class. My default in the switch can take care of telling the user that the key was unknown, as can I with the reflection solution. Also, the string keys you would have to use to fetch from the Dictionary blind the compiler.
    Guge : @John - The language was called "BRS MNS". It was a menu system for a text database. I did it when I was 20. I'm still proud of it.
  • I see that you already have multiple fields in your class that you use to hold the variables. In that case, what you are doing is fine.

    Otherwise, you can have one Hashtable (maybe add in a C# indexer as a twist) to hold all of them, and your loop will end up like this:

    foreach (string param in parms.Split(';'))
    {
        string[] parts = param.Split('=');
        string key = parts[0].Trim().ToLower();
        string value = parts[1].Trim();
        MyHashTable[key] = value;
    }
    

    The problem with this approach is that you should only have one type of value. For example, if your param list can contain both string and int types, it makes the code messier, especially when you need to perform error checking, validation and so on.

    I personally would stick with what you already have.

  • Not sure if I understand either, but it sounds like you're overcomplicating things. Don't reinvent the wheel; use BCL classes as much as you can, since these classes are proven to work efficiently and will save you lots of time. It sounds like you could implement it with some sort of Dictionary<,> along with, as Guge suggested, reflection.

    leppie : It seems natural to me that way too.
  • You could use reflection for this:

    Type t = this.GetType();
    foreach (string param in parms.Split(';'))
    {    
        string[] parts = param.Split('=');    
        string key = parts[0].Trim().ToLower();    
        string value = parts[1].Trim();    
    
        t.GetProperty(key).SetValue(this, value, null);
    }
    
    Guge : I've done this a few times but I have always used a designated class, or marked available properties with a custom attribute.
    Jon B : Same here. I prefer using an attribute and an arbitrary key name or enum so I can rename my properties at will. However, I thought this example was best for the code sample provided.
    Coderer : What happens if one of the passed values isn't a valid property name? The OP's code safely ignores bad params...
    Jon B : @Coderer - you would want to add error handling or check that the property exists before trying to set it. My example is just meant to get the OP started, not to be complete production code.
  • Or Regex:

    string parms = "param1=1;param2=2;param3=3";
    string[] parmArr = parms.Split(';');        
    
    string parm1 = Regex.Replace(parmArr[0], "param1=", "");
    string parm2 = Regex.Replace(parmArr[1], "param2=", "");
    string parm3 = Regex.Replace(parmArr[2], "param3=", "");
    
    GalacticCowboy : Only works if the params are in that order. "param2=2;param3=3;param1=1" will fail, whereas the OP will handle it. And the "regex" portion is superfluous, as you're just doing a straight string replace.
  • For what it's worth, the WTF article was a WTF because its outer loop was completely useless, as noted in the article - it was just as easy, and more direct, just to set an index variable directly than to loop and test it.

  • I actually think the OP's code is fine. It's not perfect -- there might be simpler or cleaner ways to do it, but it effectively allows for readable mappings between member/property names and input-parameter names. It leaves your properties strongly typed (unlike the hashmap/dictionary solutions, unless your class has only one type for all its properties...) and gives you one fairly-obvious place to fix or add mappings.

Which is the best open source project that uses Lucene extensively?

Either in .NET or Java.

From stackoverflow
  • Does hibernate search count?

    suhair : yes indeed. Thanks
  • I believe ScrewTurn Wiki is using it; see here for more info.

  • Wikipedia uses Lucene, as documented here.

    Karl : Why was this downvoted?
    jamesh : Wikipedia is based upon MediaWiki. MediaWiki is written in PHP; the question asks for Java or .NET. There does turn out to be a Lucene extension for MediaWiki, but this answer doesn't give any links to it.
    Avi : Wikipedia is a large open source project, and uses Lucene for search: http://www.mediawiki.org/wiki/Extension:Lucene-search
    Bajji : The information is still relevant to other gawkers in this discussion.
  • The Fedora repository software uses Lucene and Solr quite extensively for search implementation, with Fedora's use of extensible content models plugging into Lucene's ability to handle metadata quite gracefully.

  • neo4j uses Lucene.

  • The products of XWiki too. :)

PHP $string{0} vs. $string[0];

In PHP you can access characters of strings in a few different ways, one of which is substr(). You can also access the Nth character in a string with curly or square braces, like so:

$string = 'hello';

echo $string{0}; // h
echo $string[0]; // h

My question is, is there a benefit of one over the other? What's the difference between {} and []?

Thanks.

From stackoverflow
  • Use $string[0]; the other method (braces) is being deprecated in PHP 6 (src).

    Note: Strings may also be accessed using braces, as in $str{42}, for the same purpose. However, this syntax is deprecated as of PHP 6. Use square brackets instead.

    Jay : Will the use of {} as a method of evaluation be removed also? E.g ${$dynamic_object_name}->doStuff()
    Owen : Nope, I believe their goal is just to standardize accessing strings as arrays.

PHP Script to execute multiple URLs?

Is there some sort of PHP script that will run a series of URLs and then direct the user to the final destination? The use for this is creating a checkout cart on a site that doesn't have a robust "wishlist" feature.

The script runs a series of "add item to cart" URLs, and then the final destination takes the user to their cart of products I've picked out for them.

From stackoverflow
  • See http://php.net/curl

    edit: As for managing remote sessions through cURL, it depends how the remote site tracks sessions. Some use cookies (which cURL supports), some generate a sessionid token that you have to pass back in subsequent requests, either as a parameter or in the http header.

    The docs for PHP's cURL API are pretty sparse, so you may have to hunt for more complete tutorials. I found the following by Googling for "curl cookie tutorial":

    http://coderscult.com/php/php-curl/2008/05/20/php-curl-cookies-example/
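
    A rough sketch of that approach (the URLs are made up; the key point is reusing one cookie jar so the remote shop sees a single session):

    <?php
    // Replay the "add to cart" URLs server-side, keeping cookies between requests.
    $urls = array(
        'http://www.store.com/cart.php?add=25',
        'http://www.store.com/cart.php?add=27',
    );

    $cookieJar = tempnam(sys_get_temp_dir(), 'cart');

    $ch = curl_init();
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieJar);   // write cookies here
    curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieJar);  // and send them back

    foreach ($urls as $url) {
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_exec($ch);
    }
    curl_close($ch);

    // Note: the cart session lives in this cookie jar, not in the visitor's browser,
    // which is exactly the limitation Tom Haigh raises in the comment below.
    header('Location: http://www.store.com/cart.php');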

    Tom Haigh : How would this work - would you pass the user's session ID to curl? For a shopping cart the script needs to see the end-user's session data. By just using curl wouldn't a new session get created on every request, because it is requested from PHP and not the user's browser?
  • Yes you can do this with ajax.

    Use jQuery to do your ajax requests.

    e.g

    $.get("http://mywebsite.com/json/cart_add.php?pid=25");
    $.get("http://mywebsite.com/json/cart_add.php?pid=27");
    

    If you use sessions then it will be added to the current session providing it is on the same domain.

    MrChrister : That is slick. Is there a security issue to worry about? I can't think of any, but I am not necessarily the best at figuring them out.
    Jay : Not really, they would have to hi-jack your session id which should be fine if you NEVER pass it via the url.
    MrChrister : So as long as the cart_add.php was giving back good responses, no worries. This attack wouldn't apply because you can mitigate the response, right? http://haacked.com/archive/2008/11/20/anatomy-of-a-subtle-json-vulnerability.aspx
    Jay : If your pages are protected against SQL injection on any REQUEST parameters (POST & GET) and you do NOT accept anything that allows another to hijack the session, then it is no different from visiting the page in the address bar. You could also use an iframe for this, but that's what AJAX takes away the need for!
    MrChrister : Cool. Thank you. I am going to try your method in my project then. +1
    Jay : As far as personal information is concerned (secure data), a) you should be doing it over SSL and b) you should never have a public page that dishes out user data simply by passing the user / customer id. It's up to the coder really whether they make it insecure or not.
    Jay : FYI all my json data scripts need to have active sessions in order for them to work, if you are using database sessions this can get quite tough, but it is workable using a token which combines the client user agent with some other user specific data in order to pass to the json responder.
    Jay : This is AJAX however and does not do any JSON so the OP (original poster) should be safe!
  • It really depends on the specifics of your site.

    If it's OO, you may be able to call the relevant methods one after the other to add items to the basket, or you may be able to do this with includes.

    Or it may be that the site has some include files you can use.

    Or it may have a mechanism to redirect users after adding items to the basket that you can take advantage of.

    If not, other answers that have appeared whilst I was writing suggest valid ways to achieve this with javascript or cURL.

  • OK, I'm going to try the AJAX suggestion, but I'm not sure how the code is formatted with GET and POST. This is what I've started with, and it doesn't fetch the URL (I swapped in generic URLs for demonstration):

     <html>   
     <head>                                        
     <script type="text/javascript" src="jquery-1.2.6.min.js"></script>          
     <script type="text/javascript">      
     $(document).ready(function() {    
     $("a").click(function(){    
     $.get("http://www.store.com/item4");    
     $.get("http://www.store.com/item5");
     alert("Items Added, Now Redirecting");           
     });    
     });                                                         
     </script>                                                         
     </head>  
     <body>
     <a href="">Link</a>                                                                                               
     </body>                                                                        
     </html>
    
    MrChrister : What does the script you are targeting output after it adds the items to the cart?
    mrtunes : nothing, i added a post line and it didn't help $.get("http://www.store.com/item5"); $.post("http://www.store.com/cart.php");
    MrChrister : Put some outputting code in the script that adds items to the cart. Then use something like this $.get("http://www.store.com/item5", function(data){ alert("Data Loaded: " + data); }); To see your output from the page calling the $.get
    mrtunes : hmm, i gave that a shot to no avail. do you know if there's a working example of this sort of thing anywhere?
    Jay : you should access the urls you are trying directly to ensure there is not an error in your script. Also make sure you are debugging in Opera or Firefox because IE is awful for javascript debugging unless you have some particular tools installed.
    Jay : Any PHP errors generated when the javascript accesses a script via ajax should show up in the javascript console.