Tuesday, March 1, 2011

What are some of the not-so-common issues with getting remote access to a SQL 2k8 instance?

Howdy... here on my local LAN, I have a Windows Server 2k8 box with SQL Server 2k8 installed. I can connect to the database engine using the SSMS tool on the server itself, but when I attempt to connect to the database engine from a remote machine (also on the same LAN), the connection fails with the usual generic message about not being able to contact the server.

Before you offer the "usual" solutions, please let me say that I have already verified the instance name, verified that the instance is configured to allow remote connections, verified that the SQL Browser service is running, and verified that neither the Windows Firewall on the server nor on the client is getting in the way (tested with both completely disabled). I've even attempted to connect via the server's IP address on the LAN rather than its Windows hostname. :)

Does anyone know of any "uncommon" reasons (or even common ones that I failed to mention here) why I would be unable to connect to the database engine from a remote machine?

Thanks.

From stackoverflow
  • In SQL Server Configuration Manager, under Network Configuration -> Protocols, enable TCP/IP or any other protocols that you need. The default is: only Shared Memory.

  • You mentioned that you tried with the windows firewall disabled on both client and target, but is there also a virus scanner running on either/both machines? I've been caught out in the past where the virus checker has an integral firewall as well.

    Failing that, this MS forum article contains a list of things to check.

  • I would check whether you enabled remote connections on the specific IPv6 addresses, because if you are on the local network and connecting from Vista you may be connecting over IPv6.

  • http://blogs.msdn.com/sql_protocols/archive/2006/09/30/SQL-Server-2005-Remote-Connectivity-Issue-TroubleShooting.aspx

Are there side effects to using string concatenation or jQuery inside jquery selectors?

While exploring jQuery I came up with the following weird script. I don't really see myself doing this, however concatenating strings to build a name dynamically is not unusual in JavaScript.

Any feedback welcome.

...
    <script type="text/javascript">
        var a = 'y';
        $(document).ready(function() {
            $('p[id^=' + $('p[id=x]').html() + a + "]").css('color','blue');
        });
    </script>
...

<p id="x">2a</p>
<p id="2ay_">mytext</p>
From stackoverflow
  • Short answer is no, there aren't side-effects. Your example is quirky, but as you said you were just exploring to see what you could do. I have used string concatenation and function return values inside jQuery selectors before; it can be a useful technique for managing sets of related elements which reside in different parts of the DOM.

    I admit I haven't used jQuery.html() inside a selector, but there isn't any inherent reason why that's "bad". I just can't think of a situation where that'd be strictly necessary. It does make the code rather... unusual, and hence more difficult to understand and maintain.

    If you actually find a use for this in production code, please let us know. I'd be fascinated to see what it is. :)

    Florin : Just wondering about some black magic in jQuery that would give unwanted side effects inside selectors. What I find myself needing is to pass the 'search' string to the selector at runtime. By the way, could jQuery support this pattern: $('p[id=?]').click(paramValue, function() {}); ?
  • In my opinion it's unreadable and unmaintainable... anything like that should be avoided. Imagine you joined a new company and they put you in front of a whole bunch of js/jquery code that looked like that!!

    P.S. you should use addClass to change the color. I would rather change a CSS class; after all, it is a style setting, and it is also more extensible in that you can add other styles without any more js code.

  • No, nothing weird. Other than maybe some unwanted results if someone/you went and modified the HTML within the p element. It is definitely pretty weird the way it's coded, but nothing really wrong with it (other than being unreadable/unmaintainable).

    jQuery is nothing special -- it's just a JavaScript library. You aren't using some special JS syntax or anything; jQuery is simply taking a string and using it as a selector. So that string can be constructed any way you see fit.

  • Just wondering about some black magic in jQuery that would give unwanted side effects inside selectors. What I find myself needing is to pass the 'search' string to the selector at runtime.

    By the way, could jQuery support this pattern:

    $('p[id=?]').click(paramValue, function() {}); where the paramValue replaces the placeholder '?' ?

  • Yep, no reason you can't do this. Since $ is actually a function call itself, anything that needs to be evaluated in order to pass a parameter to it is evaluated first.
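Since, as the answers note, the selector is just a string, the runtime construction Florin asks about can be isolated in a plain helper. A minimal sketch (no jQuery required to see the idea; `buildPrefixSelector` is a hypothetical name, not a jQuery API):

```javascript
// Build a jQuery-style attribute selector at runtime from parts.
// "buildPrefixSelector" is a hypothetical helper, not part of jQuery.
function buildPrefixSelector(tag, idPrefix) {
  return tag + '[id^=' + idPrefix + ']';
}

// Mirrors the original snippet: the prefix comes from page data plus a variable.
var htmlOfX = '2a';  // what $('p[id=x]').html() returns in the example
var a = 'y';
var selector = buildPrefixSelector('p', htmlOfX + a);
console.log(selector); // p[id^=2ay]
```

The resulting string would then be passed straight to `$()`, e.g. `$(selector).css('color', 'blue');`.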

why is this legal, c++ typedef func

I did this in MSVC 2005.

typedef void (*cleanup_t)();

void func(cleanup_t clean)
{
    cleanup_t();
}

Why does this compile, and not give me a warning? OK, it gave me an unreferenced formal parameter warning, but originally I did this when clean was a class member, so there was no unreferenced formal parameter warning when this code gave me problems.

What is cleanup_t(); really doing, and what is the point? Now for laughs I tried int() and that worked also.

From stackoverflow
  • I think it's an expression that evaluates to a default value for a cleanup_t type. In other words, an expression that returns a NULL pointer to a function that returns void.

    In C/C++, expressions with no side effects (which this is - I think) are valid statements, just like you can have a statement like:

    1 + 2;
    

    That's not a syntax error, but some compilers might give a warning. They don't often give a warning for side-effect-less expressions that return NULL values or are simply variable names because that type of expression is often used in macros for debugging purposes (like the assert() macro).

    You can think of it as calling the default constructor for the cleanup_t type. Having this default constructor-like syntax for built-in types (or typedef's of them) was added to C++ so that templates could set items of the type passed in as a template parameter to default values while still allowing the template type parameter to be a non-user defined type. There might be other reasons, but I believe that to be one of them.

    Something like:

    template <class T>
    class foo
    {
        T myT;
    
        public:
    
        foo() {
            myT = T();
        };
    };
    
    typedef void (*cleanup_t)();
    
    
    class bar
    {
    };
    
    
    int not_quite_a_cleanup_t_func()
    {
        return 1;
    }
    
    
    int main()
    {
        foo<int> intFoo;
        foo<cleanup_t> cleanup_t_foo;
        foo<bar> barFoo;
    
        // here I'm going to harp on one of the things I don't like about C++:
        //
        //  That so many things that look like function calls are not or that
        //  the parens cause subtle behavior changes.
        //
        //  I believe this is the reason this question was posted to 
        //  stackoverflow, so it's not too far off topic.
        //  
        //  Many of these things exist because of backwards compatibility with C or
        //  because they wanted to fit in new features without adding keywords or
        //  new reserved tokens or making the parser even more complex than it already
        //  is.  So there are probably good rationales for them.
        //
        //  But I find it confusing more often than not, and the fact that there
        //  might be a rationale for it doesn't mean I have to like it...
    
        cleanup_t cleanup1();    // declares a function named cleanup1 that returns a cleanup_t
    
        cleanup_t cleanup2 = cleanup_t();   // cleanup2 is a variable of type cleanup_t that 
                                            //  is default initialized
    
        cleanup_t* cleanup3 = new cleanup_t;    // cleanup3 is a pointer to type cleanup_t that 
                                                //  is initialized to point to memory that is 
                                                //  *not* initialized
    
        cleanup_t* cleanup4 = new cleanup_t();  // cleanup4 is a pointer to type cleanup_t that
                                                //  is initialized to point to memory that *is*
                                                //  initialized (using default initialization)
    
        cleanup2 = cleanup_t( not_quite_a_cleanup_t_func);  // explicit type conversion using functional notation
    
        cleanup_t();    // the OP's problem
        cleanup2();     // call the function pointed to by cleanup2
        (*cleanup2)();  // same thing
    
        class cleanup_class
        {
            cleanup_t cleanup5;
    
        public:
            cleanup_class() : 
                cleanup5() // class member default initialization
            { };
        };
    }
    
  • It's executing a default initializer for the cleanup_t type to create a temporary of that type, and then never actually using that temporary.

    It's a lot like a constructor call, the "MyClass()" part of "MyClass c = MyClass();", except that pointer-to-function types don't actually have constructors. Of course in my code snippet here, "MyClass()" doesn't necessarily create a temporary, because it's an initializer expression. The "MyClass()" in "MyClass().some_method();" is perhaps a closer analogy.

    "int()" is another way of saying "int(0)", which is another way of saying "(int)0", which is another way of saying "0". Again, it assigns to a temporary, and if that's the whole statement then the temporary is unused.

    If you compile the code in the question with -Wall on GCC, you get a warning "statement has no effect". The code a person doing this might have meant to type, "clean();", wouldn't produce that warning because of course it would have the effect of calling the function. Yet another reason to switch warnings on, and fix 'em properly ;-)

page not loading in javascript

I have got a button with an onclick javascript event that does some form validation. If the validation fails, false is returned. Otherwise nothing is returned, and the form should be submitted.

But what's happening is that the URL appears in the address bar, but the page never loads. No headers are sent, no error messages display. Just a blank page.

This seems to only happen in IE.

From stackoverflow
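One common way to make the described pattern more robust is for the onclick validator to always return an explicit boolean, so the browser either blocks or allows the submit unambiguously (e.g. `onclick="return validateForm(this.form);"`). A sketch, where `validateForm` and the field names are hypothetical:

```javascript
// Hypothetical validation helper: always return an explicit boolean.
// Wired up as: <input type="submit" onclick="return validateForm(this.form);">
function validateForm(form) {
  var name = form && form.name ? form.name.value : '';
  if (name === '') {
    // alert('Name is required');  // report the problem to the user
    return false;                  // cancel the submit
  }
  return true;                     // explicitly allow the submit
}
```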

How to use the TabContainer control inside a templated FormView?

Is it possible to use a TabContainer inside a templated FormView like so:

            <ItemTemplate>
            <cc1:TabContainer ID="TabContainer1" runat="server">
                <cc1:TabPanel ID="Tab1" runat="server">
                    <HeaderTemplate>Tab One</HeaderTemplate>
                    <ContentTemplate>
                    ... bound fields  
                    </ContentTemplate>
                </cc1:TabPanel>
                <cc1:TabPanel ID="Tab2" runat="server">
                    <HeaderTemplate>Tab 2</HeaderTemplate>
                    <ContentTemplate>
                    ... bound fields    
                    </ContentTemplate>
                </cc1:TabPanel>
            </cc1:TabContainer>
        </ItemTemplate>

        <EditTemplate>
            <cc1:TabContainer ID="TabContainer1" runat="server">
                <cc1:TabPanel ID="Tab1" runat="server">
                    <HeaderTemplate>Tab One</HeaderTemplate>
                    <ContentTemplate>
                    ... bound fields  
                    </ContentTemplate>
                </cc1:TabPanel>
                <cc1:TabPanel ID="Tab2" runat="server">
                    <HeaderTemplate>Tab 2</HeaderTemplate>
                    <ContentTemplate>
                    ... bound fields    
                    </ContentTemplate>
                </cc1:TabPanel>
            </cc1:TabContainer>
        </EditTemplate>

Everything works fine for only one template view at a time; for example if ItemTemplate works then EditTemplate won't. ASP.NET will complain about duplicate bound field IDs.

Has anybody tried doing what I'm trying to do?

Thanks. - Gene

EDIT :

I don't think the tab containers with the same IDs is the issue here since they are both inside separate Template elements and only one Template gets rendered at a time.

UPDATE:

I didn't manage to find a solution, and I think it's not possible. So I just moved on and used unique IDs. Being lazy, I wrote some code to automate the dreaded naming process. I hope someone out there has a better answer to share. Anyway, I'm too poor to afford to put a bounty on it. ;-)

From stackoverflow
  • Haven't used the Tab container much, but you need to define unique IDs for each element on the page.

    <cc1:TabContainer ID="TabContainer1" runat="server">
    <cc1:TabContainer ID="TabContainer2" runat="server">
    
  • Did you ever find the solution to this? I'm having the same problem and I don't want to rename all my controls.

  • I solved this the hard way, by changing the duplicate field names in each of the tabs.

Groovy in Ant build.xml (with Java classes)

I have to include Groovy classes in existing Java apps, and include Groovy in the Ant build.xml file. What is the best way to configure the Ant build.xml for it?

Thanks Tatyana

I was suggested to update my question, but I am not exactly sure what it means. Tatyana

Urs Reupke wrote: Please update your question to help future answers become more helpful. – Urs Reupke

From stackoverflow
  • To use Groovy in your ant script, you basically have to declare the Groovy ant task:

    <project name="groovy-build" default="listSourceFiles">
    
    <taskdef name="groovy"
         classname="org.codehaus.groovy.ant.Groovy"/>
    <groovy>
        ant.... // some ant groovy directives
    </groovy>
    </target>
    </project>
    

    However, in your build.xml, you have to be careful to refer to filesets within your current target.

  • @VonC is correct about including Groovy scripting in your Ant build.

    To expand a bit:

    To compile .groovy and .java sources together for use in the same application, use the <groovyc> Ant task.

    See Ant Integration with Groovy for the reference.

  • Are there more specifics in combining Java and Groovy compilations? Sequence of tasks?

    Ken Gentle : groovyc will compile Groovy and Java source (first Groovy to Java, then all the Java files, if you want to look at it that way), so you don't need another step. Or you can separate the Java compilation from the Groovy and include the compiled/jar-ed classes in the groovyc task.
    Urs Reupke : Please update your question to help future answers become more helpful.
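To expand on the <groovyc> suggestion above, a minimal joint-compilation target might look like the following sketch. The directory layout and the Groovy jar location are assumptions; adjust them to your project:

```xml
<project name="joint-build" default="compile">
    <!-- lib/groovy-all*.jar is an assumption; point it at your Groovy jar -->
    <path id="groovy.classpath">
        <fileset dir="lib" includes="groovy-all*.jar"/>
    </path>

    <taskdef name="groovyc"
             classname="org.codehaus.groovy.ant.Groovyc"
             classpathref="groovy.classpath"/>

    <target name="compile">
        <mkdir dir="build/classes"/>
        <!-- joint compilation: groovyc hands the .java files to a nested javac -->
        <groovyc srcdir="src" destdir="build/classes" classpathref="groovy.classpath">
            <javac debug="on"/>
        </groovyc>
    </target>
</project>
```

The nested `<javac>` element is what makes Groovy and Java sources that reference each other compile in one pass.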

ASP.NET Login to a Website with Forms Authentication vs None

Hi folks,

I've got a stock standard ASP.NET website. Anyone can read/view any page (except the admin section) but when someone wants to contribute, they need to be logged in. Just like most contribution sites out there.

So, if I have my OWN login control or username/password/submit input fields, why would I want to have forms auth turned on instead of just none? What does forms auth give me, when having my own code that checks my database for a user/pass plus my own two input fields + a submit button does the job perfectly?

(NOTE: I really dislike the asp.net membership stuff that creates all those tables and usp's in the database, so please don't suggest I use that).

Like, with my code, when I authenticate a user (with my own database code), I manually create my own identity, etc.

Is all this required? What is the main purpose of it?

cheers!

From stackoverflow
  • You can authorize your users however you want. FormsAuthentication is used to set the session identity and the authentication cookie that allows users to stay logged in until they log out or the session expires. You don't need to use the membership providers to use FormsAuthentication. It sounds like you are just replicating this functionality.

    // ...do your authentication against your DB or Active Directory

    if (Request.QueryString["ReturnUrl"] != null)
    {
        FormsAuthentication.RedirectFromLoginPage(userName.Text, false);
    }
    else
    {
        FormsAuthentication.SetAuthCookie(userName.Text, false);
    }
    

    Then you need to set up to use it in the web.config

    <system.web>
      <authentication mode="Forms">
        <forms loginUrl="Login.aspx"
               protection="All"
               timeout="30"
               name="my-auth-cookie" 
               path="/"
               requireSSL="false"
               slidingExpiration="true"
               defaultUrl="default.aspx" />
      </authentication>
    </system.web>
    

    You get all the benefits of doing your own authorization and not having to implement the cookie infrastructure. Note, since your web site only needs to authorize when editing, you'll need to set the permissions that allow everyone to read all pages and implement your own logic to redirect them to the login page when they want to edit.

    More information here.

    Pure.Krome : So by using forms auth, the asp.net plumbing handles READING IN the cookie, etc... which then sets up the IDENTITY instance for the current thread? Secondly, I also can't use RedirectFromLoginPage because I was doing AJAX login :) (muhaha!). Lastly, does IIS7 need to have forms auth ON for the site?
  • Please note that you can leverage the ASP.Net forms authentication and implement your own membership provider which would read/write to your existing tables.

  • I also started with the default forms authentication that comes out of the box with an ASP.NET web project. However I too was very dismayed by the overbuilt features of the tables in that stock database.

    In my work experience I have used custom-built authentication and just carried the logged-in user's profile (self-created) in a session variable.

    The asp.net membership can be difficult to port to some shared hosting providers also.

c# performance: type comparison vs. string comparison

Which is faster? This:

bool isEqual = (MyObject1 is MyObject2)

Or this:

bool isEqual = ("blah" == "blah1")

It would be helpful to figure out which one is faster. Obviously, if you apply .ToUpper() to each side of the string comparison, as programmers often do, that requires allocating new strings, which affects performance. But what if .ToUpper() is out of the equation, as in the above sample?

From stackoverflow
  • How about you tell me? :)

    Take the code from this Coding Horror post, and insert your code to test in place of his algorithm.

  • The first one is used to compare types, not values. If you want to compare strings case-insensitively you can use:

    
    string toto = "toto";
    string tata = "tata";
    bool isEqual = string.Compare(toto, tata, StringComparison.InvariantCultureIgnoreCase) == 0;
    Console.WriteLine(isEqual);
    
  • According to the book Maximizing .NET Performance the call

    bool isEqual = String.Equals("test", "test");
    

    is identical in performance to

    bool isEqual = ("test" == "test");
    

    The call

    bool isEqual = "test".Equals("test");
    

    is theoretically slower than the call to the static String.Equals method, but I think you'll need to compare several million strings in order to actually detect a speed difference.

    My tip to you is this; don't worry about which string comparison method is slower or faster. In a normal application you'll never ever notice the difference. You should use the way which you think is most readable.

  • Comparing strings with the "==" operator compares the contents of the strings rather than the string object references. Comparing objects will call the "Equals" method of the object to determine whether they are equal or not. The default implementation of Equals does a reference comparison, returning True if both object references are the same physical object. This will likely be faster than the string comparison, but it depends on the type of object being compared.

  • I'm a little confused here.

    As other answers have noted, you're comparing apples and oranges. ::rimshot::

    If you want to determine if an object is of a certain type use the is operator.

    If you want to compare strings use the == operator (or other appropriate comparison method if you need something fancy like case-insensitive comparisons).

    How fast one operation is compared to the other (no pun intended) doesn't seem to really matter.


    After closer reading, I think that you want to compare the speed of string comparisions with the speed of reference comparisons (the type of comparison used in the System.Object base type).

    If that's the case, then the answer is that reference comparisons will never be slower than any other string comparison. Reference comparison in .NET is pretty much analogous to comparing pointers in C - about as fast as you can get.

    However, how would you feel if a string variable s had the value "I'm a string", but the following comparison failed:

    if (((object) s) == ((object) "I'm a string")) { ... }
    

    If you simply compared references, that might happen depending on how the value of s was created. If it ended up not being interned, it would not have the same reference as the literal string, so the comparison would fail. So you might have a faster comparison that didn't always work. That seems to be a bad optimization.

    Robert Rossney : The other answers have missed the key point, which is that the "is" operator doesn't do anything like what the questioner thinks it does.
  • I'd assume that comparing the objects in your first example is going to be about as fast as it gets, since it's simply checking whether both objects point to the same address in memory.

    As it has been mentioned several times already, it is possible to compare addresses on strings as well, but this won't necessarily work if the two strings are allocated from different sources.

    Lastly, it's usually good form to try and compare objects based on type whenever possible. It's typically the most concrete method of identification. If your objects need to be represented by something other than their address in memory, it's possible to use other attributes as identifiers.

    Benjamin Podszun : You missed the meaning of the "is" operator as well, I think?
  • I have a file name like ABC_KK_123456_2008081218.txt

    here KK is the branch code and 12345 is the employee#

    I would like to parse the file name by branch code and employee#.

    How can I use stringcomparisontype here?

    Any help will be great!

    tx

    Benjamin Podszun : Don't add unrelated questions as answers to other questions. Start a new one.
  • If I understand the question and you really want to compare reference equality with the plain old "compare the contents": Build a testcase and call object.ReferenceEquals compared against a == b.

    Note: You have to understand what the difference is and that you probably cannot use a reference comparison in most scenarios. If you are sure that this is what you want it might be a tiny bit faster. You have to try it yourself and evaluate if this is worth the trouble at all..

Jasper exported to Excel ignoring background color?

Have you ever had alternating background colors in a Jasper report and then exported it to Excel? The Excel export seems to ignore the alternating color.

I've got a Jasper report where the rows alternate background color using the procedure referenced HERE. When I preview it using the viewer or export to PDF it works -- but not when I export to Excel. I've tried using both JRXlsExporter and JExcelApiExporter, to no avail.

I think it might be a side-effect of how you have to make alternating row colors in Jasper, which I despise to begin with, but have found no other way.

Thanks in advance!

From stackoverflow
  • Did you try the idea suggested in the comment of the very procedure you are referring to?

    First how to create new report style with condition:

    Recent releases of JasperReports include report styles, which make this a bit easier - you no longer have to create the rectangle.

    I use iReport to create my styles - there is a “styles” pane that by default is docked with the “Library” pane. If you make it visible you can create a new style in the styles library. In the screen that pops up give the style a name (say “EvenOddRowStyle” and press “Add” under “Style Conditions”. Use one of the expressions that Brian gave and press Apply. and in the “Common” section press the “…” button next to “Backcolor” and pick the background color you want. Finally, when done with your report apply that style to all the fields in the rows you want to highlight. Just drag the style from the styles pane onto the field.

    Then how to define a style which will be applied when exported to Excel:

    defining a new style with the condition expression:

    Boolean.valueOf( $V{PAGE_COUNT}.intValue() % 2 == 0 )
    

    on it, without using a rectangle and a "print when" expression!

    discgolfer : I have now and it works! I didn't realize on the link that there was a "show all comments" option, so I didn't review them. Thanks for pointing it out to me! This approach is waaaaay more elegant. Thanks again man!
  • Also, be sure that the conditional styles you make have the "opaque" option checked... If not, the background color will never appear in the Excel report (never!!!)....

    Also, the "opaque" option must be checked in the text fields....

    And if you notice that the text fields never take the style that you gave them, try setting the forecolor and background to null in the properties panel of each text field (this works for me)...

    Hope this helps... bye.

How to cancel an ajax query (from jquery) on the server side?

Sounds like a weird question, but say I have something like this:

$.post( "/myajax.php",
        { "param1": value1, "param2": value2 },
        function( data, status ) {
            if( status == "success" ) {
               $("#someid").html( data );
            }
        }, "html" );

While myajax.php is doing whatever it needs to do, it decides that it does not need to make any changes to #someid. If it just returns, the someid element is blanked, but I want to leave it alone. How can the php query return in such a way that status != "success"?

From stackoverflow
  • Wouldn't it be simpler to check on the data returned?

    $.post( "/myajax.php", { 
         "param1": value1, 
         "param2": value2 
       }, function( data, status ) {
           if( data != "" ) {
               $("#someid").html( data );
           }
       }, 
       "html" 
    );
    
    Owen : Yes and no; both are pretty basic ideas I guess, but returning a non-200 OK seems more "right" given it changes the status code.
    Eran Galperin : but how do you differentiate it from a real error? I think it's important for the request to complete correctly.
    alastairs : @Eran - I agree. Using the data parameter means you're filtering on the logic of the server-side code (correct), not the HTTP transaction (incorrect).
    alastairs : Additionally, if you return a 4xx or 5xx error code and later decide to put in an error handler on the $.ajax() for failed AJAX requests, you're going to end up breaking the existing logic.
    Graeme Perrow : I originally accepted Owen's answer, but upon reflection, I think I like Eran's idea better. They both work, FWIW, but I agree that this is cleaner.
  • The ajax calls treat any 200 OK as success. As such, you could return something else from your PHP script to trigger an error in your ajax handler.

    Here are some other status codes in case you wanted to choose something more appropriate.

    Edit: I don't necessarily disagree with that approach, but I'm still leaning more towards operating on the status message (HTTP status) rather than the response body.

    My thoughts are, in a post transaction such as the one mentioned, you are updating a resource. One could reasonably assume you expect one of two things to happen (in "success" conditions):

    1. A 200 OK means the content was posted without issue, and the resulting response typically is to show the new content. The ajax method "enforces" this behaviour by allowing you to get the response body to update your UI.

    2. In a scenario where the update does not need to be done, a 200 OK is fine as well (as the request was processed as expected), but perhaps a 204 No Content is better, as it suggests the request was fulfilled but no change of the view is necessary. I'm leaning towards believing this is more in line with the ajax call as well, as you can ignore the response body (as there is none) and operate on the status (204? update the UI to say "no changes were necessary" or similar).
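Either way, the client-side decision can be kept in one small helper that reflects both conventions discussed above: an empty body means "no change" (Eran's approach), and a non-2xx status never reaches the success handler at all (Owen's approach). `shouldUpdate` is a hypothetical name, not a jQuery API:

```javascript
// Decide whether to touch #someid: only on a successful response
// with a non-empty body. With the status-code approach, a 4xx/5xx
// response routes to the error callback instead, so this never runs.
function shouldUpdate(status, data) {
  return status === 'success' && data !== '';
}

// Usage inside the $.post callback:
//   if (shouldUpdate(status, data)) { $("#someid").html(data); }
```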

Oracle Check Constraint

I've been struggling with this check constraint for a few hours and was hoping someone would be kind enough to explain why this check constraint isn't doing what I think it should be doing.

ALTER TABLE CLIENTS
add CONSTRAINT CHK_DISABILITY_INCOME_TYPE_ID CHECK ((IS_DISABLED IS NULL AND DISABILITY_INCOME_TYPE_ID IS NULL) OR (IS_DISABLED = 0 AND DISABILITY_INCOME_TYPE_ID IS NULL) OR (IS_DISABLED = 1));

Essentially, you must be disabled to collect disability income. It appears as though the first part of this check constraint (IS_DISABLED IS NULL AND DISABILITY_INCOME_TYPE_ID IS NULL) is not enforced (see below).

The available values for DISABILITY_INCOME_TYPE_ID are 1 and 2, which is enforced via foreign key. Both IS_DISABLED and DISABILITY_INCOME_TYPE_ID can be null.

-- incorrectly succeeds (Why?)
INSERT INTO CLIENTS (IS_DISABLED, DISABILITY_INCOME_TYPE_ID) VALUES (null, 1);
INSERT INTO CLIENTS (IS_DISABLED, DISABILITY_INCOME_TYPE_ID) VALUES (null, 2);

-- correctly fails
INSERT INTO CLIENTS (IS_DISABLED, DISABILITY_INCOME_TYPE_ID) VALUES (0, 1);
INSERT INTO CLIENTS (IS_DISABLED, DISABILITY_INCOME_TYPE_ID) VALUES (0, 2);

-- correctly succeeds
INSERT INTO CLIENTS (IS_DISABLED, DISABILITY_INCOME_TYPE_ID) VALUES (0, null);
INSERT INTO CLIENTS (IS_DISABLED, DISABILITY_INCOME_TYPE_ID) VALUES (1, 1);
INSERT INTO CLIENTS (IS_DISABLED, DISABILITY_INCOME_TYPE_ID) VALUES (1, 2);
INSERT INTO CLIENTS (IS_DISABLED, DISABILITY_INCOME_TYPE_ID) VALUES (1, null);
INSERT INTO CLIENTS (IS_DISABLED, DISABILITY_INCOME_TYPE_ID) VALUES (null, null);

Thanks for your help, Michael

From stackoverflow
  • Try using NVL in the check condition.

  • I'm not sure why the compound check isn't working, but this works:

    ALTER TABLE CLIENTS ADD CONSTRAINT CHK_1 CHECK (IS_DISABLED = 0 AND DISABILITY_INCOME_TYPE_ID IS NULL)
    
    ALTER TABLE CLIENTS ADD CONSTRAINT CHK_2 CHECK (IS_DISABLED IS NULL AND DISABILITY_INCOME_TYPE_ID IS NULL)
    
    ALTER TABLE CLIENTS ADD CONSTRAINT CHK_3 CHECK (IS_DISABLED = 1)
    

    Regards K

  • While I do not have Oracle, I did a quick test with PostgreSQL and your first example (IS_DISABLED being NULL and DISABILITY_INCOME_TYPE_ID being 1):

    postgres=> select (null is null and 1 is null);
     ?column?
    ----------
     f
    (1 registro)
    
    postgres=> select (null is null and 1 is null) or (null = 0 and 1 is null);
     ?column?
    ----------
     f
    (1 registro)
    
    postgres=> select (null is null and 1 is null) or (null = 0 and 1 is null) or (null = 1);
     ?column?
    ----------
    
    (1 registro)
    

    Here we see clearly that, in this case, your expression (at least on PostgreSQL) returns NULL. From the manual,

    [...] Expressions evaluating to TRUE or UNKNOWN succeed. Should any row of an insert or update operation produce a FALSE result an error exception is raised and the insert or update does not alter the database. [...]

    So, if Oracle behaves the same as PostgreSQL, the check constraint would pass.
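    The same three-valued behavior can be reproduced outside Oracle and PostgreSQL. Here is a sketch using Python's built-in sqlite3 (table and column names simplified from the question): a CHECK that evaluates to UNKNOWN lets the row through, while the explicit IS NULL / IS NOT NULL rewrite rejects it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Original-style constraint: the last branch is just "is_disabled = 1".
conn.execute("""
    CREATE TABLE clients_original (
        is_disabled INTEGER,
        disability_income_type_id INTEGER,
        CHECK ((is_disabled IS NULL AND disability_income_type_id IS NULL)
            OR (is_disabled = 0 AND disability_income_type_id IS NULL)
            OR (is_disabled = 1))
    )
""")

# (NULL, 1): "is_disabled = 1" evaluates to UNKNOWN, and a CHECK result of
# UNKNOWN does not reject the row -- so this insert incorrectly succeeds.
conn.execute("INSERT INTO clients_original VALUES (NULL, 1)")

# Rewritten constraint with explicit NULL tests, as suggested above.
conn.execute("""
    CREATE TABLE clients_fixed (
        is_disabled INTEGER,
        disability_income_type_id INTEGER,
        CHECK ((is_disabled IS NULL AND disability_income_type_id IS NULL)
            OR (is_disabled IS NOT NULL AND is_disabled = 0
                AND disability_income_type_id IS NULL)
            OR (is_disabled IS NOT NULL AND is_disabled = 1))
    )
""")

# Now every branch is FALSE for (NULL, 1), so the insert is rejected.
try:
    conn.execute("INSERT INTO clients_fixed VALUES (NULL, 1)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```

    SQLite, like the SQL standard, only rejects a row when the CHECK expression evaluates to FALSE, which is exactly the trap the question ran into.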

    To see if this is the case, avoid the NULL shenanigans by explicitly checking for it and see if it works:

    CHECK ((IS_DISABLED IS NULL AND DISABILITY_INCOME_TYPE_ID IS NULL)
        OR (IS_DISABLED IS NOT NULL AND IS_DISABLED = 0 AND DISABILITY_INCOME_TYPE_ID IS NULL)
        OR (IS_DISABLED IS NOT NULL AND IS_DISABLED = 1));
    
    BacMan : I'm not sure your answer is correct yet, but your explanation was extremely helpful in solving this problem on my own. This whole time I was evaluating null = 1 as false, when in fact Oracle and Postgres evaluate this expression as unknown. Major difference.
  • This solution works.

    CHECK
    ((IS_DISABLED IS NULL AND NVL(DISABILITY_INCOME_TYPE_ID, 0) = 0)
    OR (IS_DISABLED = 0 AND NVL(DISABILITY_INCOME_TYPE_ID, 0) = 0) 
    OR (IS_DISABLED IS NOT NULL AND IS_DISABLED = 1));
    

How to know (in GCC) when given macro/preprocessor symbol gets declared?

Suppose I have #define foo in various header files. It may expand to different things. I would like to know (when compiling a .cc file), when a #define is encountered, to what it will expand, in which file it is defined, and where that file got included from.

Is it possible? If not, are there any partial solutions that may help?

Feel free to add comments with clarification requests.

Edit: current answers seem to concentrate on the case where there is one #define and I just want to jump to the definition or know what the definition is. That's the simple case, and yes, your solutions work. But when I have the same #define in different files and want to know which one kicks in first, none of these techniques is useful. Okay, I actually used #warning carefully to find the right place, but that required a lot of work.

From stackoverflow
  • Use #warning. It's described here.

  • Use -E :

    # shows preprocessed source with cpp internals removed
    g++ -E -P file.cc
    # shows preprocessed source with macro definitions and include directives kept
    g++ -E -dD -dI -P file.cc
    

    The internals mentioned above are line markers, which GCC emits for its own use and which are kind of confusing to read. -P strips them.

     -E  Stop after the preprocessing stage; do not run the compiler proper.  
         The output is in the form of preprocessed source code, which is sent to the 
         standard output.
    
         Input files which don't require preprocessing are ignored.

    Note: comments correctly complain this is only a partial solution. It won't tell you when a macro will be replaced. It shows you the preprocessed source, which can be helpful anyway.

    Robert Gamble : This works on entire files, not individual #defines, and it doesn't provide the rest of the information @phjr was looking for.
    Johannes Schaub - litb : Indeed, it's only a partial solution :)
  • For the "to what it will expand" part, I use the -E switch in gcc, which gives the preprocessed output. But there is no backtrace of which macro came from where (or whether there was a macro at all).

    Another option you might use is -g3, this adds debug information regarding the macros, i.e. you can later see in your debugger the definition of each macro.

  • It won't help you find where it was defined, but you can see the definitions in effect for a particular file by using the -E -dM flags:

    g++ -E -dM file.cpp | grep MACRO
    
  • A good IDE can do this for you on demand via some form of "jump to definition".

  • I would like to know (when compiling a .cc file) when a #define is encountered,

    I know a solution to that. Compile the file with the symbol already defined as illegal C++ code (the article linked to uses '@'). So, for GCC you would write

    gcc my_file.c -Dfoo=@
    

    to what it will expand, in which file it is defined, and where that file got included from.

    If you use the trick Raymond Chen suggests, the compiler may tell you where the "conflicting" definition came from, and may give you a list of how it got included. But there's no guarantee. Since I don't use macros (I prefer const and enum) I can't say if GCC is one of the smarter compilers in this regard. I don't believe the C or C++ standards say anything about this, other than once the preprocessor runs you lose all sorts of useful information.

How do you overcome 'Analysis Paralysis'?

Time and again I find when researching something completely unknown, I go down so many bypaths that lead nowhere. For example, I was searching for a UI Automation Test tool... Stumbled onto Project White... moved to UI Automation Toolkit... to a Bugslayer article wrapper for the same... thought it would be a good idea to write one of my own... In the end nothing got done, except maybe I'm enriched by the knowledge. :)

It all started with automating a Smoke Test... which could be done easily using QTP, which my company has licenses for (but no one at the moment to write the cases; BTW, I do not have access to it).

I'm sure this is a road well traversed by my fellow journeymen.

Advice?

From stackoverflow
  • Make a little project to test out how well each of a few different ideas works; some software will have a trial period, so you can test it out. See how well the various packages perform, and at the end consider writing a recommendation: either why they all fail to meet your minimum requirements, or which of those that do appears to be the best value for the business to buy and change processes to use.

    I went through something like this when testing out various AJAX frameworks a few years ago for an ASP.Net web application, just before Microsoft came out with their own implementation. I spent a couple of weeks trying out a few different ideas, chose one in the end, and did the integration, which was rather simple as I had already spent some hours toying with it.

    Vyas Bharghava : This is so true... I do agree that the time invested usually is well spent
  • 1. Be clear about your goal.
    2. Give yourself a deadline.
    3. Like somebody else said, write small programs to test ideas.
    4. Make decisions quickly.

    Don't worry about bad decisions; just trust yourself to figure out a solution when you get there.

    Vyas Bharghava : #2 works like a gun to your head :)
  • Start off by reading as much as you can on the subject by sources you respect, acknowledging that there is a lot of BS out there, but a commonality of positive experiences is a good sign. Then narrow it way down to just a few possibilities and then write small suites of test code to prototype the concept, test the API, etc. Then if nothing pre-packaged will work, consider writing it yourself. The main thing is to try to narrow down the field of possibilities with the initial research so you don't get bogged down trying too many things.

  • Let's also be aware of the fact that "analysis paralysis" quite often means the same as procrastination. A sophisticated form of procrastination, of sorts. It's a handy excuse to keep us from starting the actual work. At least that's what I have felt a couple of times. If I had used half of my research time for trying to code something, probably half of the application would have been written already.

    That said, I have to add that social scientists recently discovered a strong link between procrastination and perfectionism. So it's not the lazy people who postpone, it's those who want to end up with the perfect result.

    This means one way of overcoming analysis paralysis should be lowering our expectations of ourselves.

    Vyas Bharghava : And it may be the closest applicable to my situation. This and time. Work to a deadline and people tend to snap out of the 'paralysis' real quick... there's a fine line... If it's a genuine unknown, we may be better off spending the time. You know, "Sharpen your axe before felling a tree" vs. "Code like hell".
    tharkun : Yes, that's true. Studies also show that experts (with real expertise, not just the title) use considerably more time for planning than novices.

Best Practices for Building a SSO System

I am looking to build a single sign-on system for a couple of web apps that use form-based authentication.

The way I envision it, my SSO system would handle authentication against Active Directory and pass the verification through to the desired web app whenever the user clicked a link through my SSO portal.

What might be the best way to build something like this?

From stackoverflow
  • Check out JA-SIG CAS. Even if you build your own, you should look at their architecture for ideas on how to implement.

  • When a user accesses an application URL without a session cookie, he is redirected to the SSO system. He then logs into the SSO form, which redirects him back to your app with a ticket ID that you can look up in the SSO system to get credentials.

    Also, take a look at Crowd and CAS.

Need help adding files to a subversion repository.

Is there a way to recursively add all "non added" files to SVN?

I want to do something like:

svn add trunk/

to add all new files, without getting:

svn: warning: 'trunk' is already under version control

If this is not possible, is there a way to get a list of all files that are not under version control in the directory?

Note that I am only interested in using the command line.

Thanks.

From stackoverflow
  • Use the --force option.

    Example:

    svn add * --force
    
  • If the first column of the output from the svn status command is a question mark, then that file is not under version control. See this page on the svn status command for more information.

  • svn status | awk '/\?/ {print $2}' | xargs svn add
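    If you prefer doing the same filtering programmatically rather than through awk, the '?' column is easy to parse. A sketch in Python (helper names are made up; it assumes plain `svn status` output and `svn` on the PATH):

```python
import subprocess

def unversioned(status_output):
    """Paths whose first status column is '?' (not under version control)."""
    paths = []
    for line in status_output.splitlines():
        if line.startswith("?"):
            # The path follows the status columns; split once on whitespace.
            paths.append(line.split(None, 1)[1])
    return paths

def svn_unversioned(workdir="."):
    """Run `svn status` and return the unversioned paths."""
    result = subprocess.run(["svn", "status", workdir],
                            capture_output=True, text=True, check=True)
    return unversioned(result.stdout)

sample = "?       new_file.txt\nM       changed.txt\nA       added.txt\n"
print(unversioned(sample))  # ['new_file.txt']
```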

Process Affinity

Is it possible to programmatically set the processor affinity for a process, either via the Win32 API (from Python) or via a command-line option?

From stackoverflow
  • In C/C++ you can use SetThreadIdealProcessor() (note that this sets a preferred processor for a thread, rather than a hard affinity mask). I have no idea if this is accessible in Python.

    DWORD processorId = 0;
    SetThreadIdealProcessor(GetCurrentThread(),processorId);
    
  • Given a process handle, you can call the SetProcessAffinityMask Win32 API function to set the processor affinity for another process. Note that all the usual security restrictions are in effect, that is, you will probably only be able to change the process affinity for a process you own.
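    From Python, one way to reach SetProcessAffinityMask is through ctypes. The sketch below is an outline under stated assumptions: the actual call is Windows-only, while the bitmask arithmetic is portable. (The pywin32 package also exposes the call as win32process.SetProcessAffinityMask.)

```python
import ctypes
import sys

def affinity_mask(cpus):
    """Build an affinity bitmask from zero-based CPU indices: CPU n -> bit n."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

def set_current_process_affinity(cpus):
    """Pin the current process to the given CPUs via the Win32 API."""
    if sys.platform != "win32":
        raise OSError("SetProcessAffinityMask is Windows-only")
    kernel32 = ctypes.windll.kernel32
    handle = kernel32.GetCurrentProcess()
    if not kernel32.SetProcessAffinityMask(handle, affinity_mask(cpus)):
        raise ctypes.WinError()

# CPUs 0 and 2 -> binary 101 -> mask 5
print(affinity_mask([0, 2]))  # 5
```

    As the answer above notes, changing another process's affinity is subject to the usual security restrictions on the process handle.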

How do I get a C# WebBrowser control to show jpeg files (raw)?

Hi. Does anyone know, in .Net 2.0 - .Net 3.5, how to load a jpeg into a System.Windows.Forms.WebBrowser control as a byte array, with the right mimetype set so it will show?

Something like:

webBrowser1.DocumentStream = new MemoryStream(File.ReadAllBytes("mypic.jpg"));
webBrowser1.DocumentType = "image/jpeg";

The webBrowser1.DocumentType property seems to be read-only, so I do not know how to do this. In general, I want to be able to load any kind of file source, with a mimetype defined, into the browser to show it.

Solutions with writing temp files are not good ones. Currently I have solved it with having a little local webserver socket listener that delivers the jpeg I ask for with the right mimetype.

From stackoverflow
  • You cannot do it. You cannot stuff images into Microsoft's web-browser control.

    The limitation comes from the IWebBrowser control itself, which .NET wraps up.

  • Noooo. There MUST be a way to get this to work. :(
    There isn't any hacky way of doing this? I seldom give up on such dilemmas and tend to find a workaround, but I have tried digging a few times on this one over a long period. But nada.

    I do not want to load off from disk, and I do not want to serve a host for the browser to get its data from.

    The reason I am using the browser is to show pictures or other files that have a IE plugin viewer, like Word or PDF or other viewers like that.

    Is there any way to maybe use the plugins directly, bypassing the need for the IE host (webcontrol)? If so, that would be a solution.

  • If you want a total hack, try having your stream be the HTML file that only shows your picture. You lose your image byte stream and will have to write the image to disk.

  • But then we are back to writing to disk. I want it all to be done in-memory. If there a sensitive data to be shown, I do not want it to be written to disk in any way. Neither sniffable as network traffic.

  • I do not know whether the WebBrowser .NET control supports this, but RFC2397 defines how to use inline images. Using this and a XHTML snippet created on-the-fly, you could possibly assign the image without the need to write it to a file.

    Image someImage = Image.FromFile("mypic.jpg");
    
    // Firstly, get the image as a base64 encoded string
    ImageConverter imageConverter = new ImageConverter();
    byte[] buffer = (byte[])imageConverter.ConvertTo(someImage, typeof(byte[]));
    string base64 = Convert.ToBase64String(buffer, Base64FormattingOptions.InsertLineBreaks);
    
    // Then, dynamically create some XHTML for this (as this is just a sample, minimalistic XHTML :D)
    string html = "<img src=\"data:image/" + someImage.RawFormat.ToString() + ";base64," + base64 + "\">";
    
    // And put it into some stream
    using (StreamWriter streamWriter = new StreamWriter(new MemoryStream()))
    {
        streamWriter.Write(html);
        streamWriter.Flush();
        streamWriter.BaseStream.Position = 0; // rewind so the control reads from the start
        webBrowser.DocumentStream = streamWriter.BaseStream;
        webBrowser.DocumentType = "text/html";
    }
    

    No idea whether this solution is elegant, but I guess it is not. My excuse for not being sure is that it is late at night. :)

    liggett78 : I believe IE used to have no support for data:image stuff.
    hangy : I ran a small test yesterday (http://browsershots.org/http://aktuell.de.selfhtml.org/artikel/grafik/inline-images/) which shows that no IE but IE8 does. :( However, I have no clue what kind of rendering engine that WebBrowser control uses. :)
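    The RFC 2397 encoding itself is easy to reproduce in any language, independent of the WebBrowser control. A minimal Python sketch (the payload bytes here are a made-up stand-in for real jpeg data):

```python
import base64

def data_uri(raw, mime="image/jpeg"):
    """Encode raw bytes as an RFC 2397 data: URI."""
    encoded = base64.b64encode(raw).decode("ascii")
    return "data:%s;base64,%s" % (mime, encoded)

# A made-up 4-byte stand-in for real jpeg data.
html = '<img src="%s">' % data_uri(b"\xff\xd8\xff\xe0")
print(html)
```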
  • Too bad you cannot set: webBrowser.DocumentType = "text/html";

    as it is read only. I'll check out the inline thing though.

    Hmm but an idea emerges.

    Since it cannot be set, how about opening up X instances of the control, one for each mimetype/plugin that I want to show (opening a dummy file from disk)? That way the mimetype will be changed, and maybe when loading the target file into .DocumentStream, it'll show correctly?

    Will test out as soon as I can.

  • You have to implement an async pluggable protocol, e.g. IClassFactory, IInternetProtocol... Then you use CoInternetGetSession to register your protocol. When IE calls your implementation, you can serve your image data from memory/provide mime type.

    It's a bit tedious, but doable. Look at IInternetProtocol and pluggable protocols documentation on MSDN.

  • Ah. Thanks. Will definitely check that one out. Will comment back on this thread when I've done something.

  • I have done what liggett78 suggests, and would just like to add that although it takes some work, it works great once in place.

  • Thanks. It was EXACTLY a way to solve the problem.

    Sample solution in C# here that works perfectly:

    http://www.codeproject.com/KB/aspnet/AspxProtocol.aspx

    All is done in memory. Since the "talking" component has to be GACed and registered to work, I'll just create it in a static way with event handlers so that I can use it from my running application. Make it generic so I can use it in many ways.

    It is strange how many times I have seen others asking the same question I have, and all have been let down with no solution. Hope this thread gets up there so people find the solution!

  • IE only supports 32 KB for inline images in base64 encoding, so this is not a good solution.

VB6: Separating Tab-Delimited Text

I have a file with several thousand rows and several columns separated with tabs. What I'd like to do is loop through each line individually, drop the columns into an array so that I can place them in another application, then move on to the next line. Unfortunately I got about as far as this:

Open "mytextfile.txt" For Input As #FileHandle
Do While Not EOF(FileHandle)
    Line Input #FileHandle, IndividualLine
    StringToBreakup = IndividualLine
Loop

So how would I go about breaking an individual line up into an array?

From stackoverflow
  • Use the Split function:

    Dim StringArray As Variant

    Open "mytextfile.txt" For Input As #FileHandle
    Do While Not EOF(FileHandle)
        Line Input #FileHandle, IndividualLine
        StringToBreakup = IndividualLine

        StringArray = Split(StringToBreakup, Chr(9))

        ' Process array here...
    Loop
    
  • Dim str() As String

    Open "mytextfile.txt" For Input As #FileHandle
    Do While Not EOF(FileHandle)
        Line Input #FileHandle, IndividualLine
        str = Split(IndividualLine, vbTab)
        Debug.Print str(0)  'First array element
    Loop
    

    To clarify: I would avoid the use of Variants and use vbTab.

New to functional programming

Hey, I'm really new to Haskell and have been using more classic programming languages my whole life. I have no idea what is going on here. I'm trying to make a very simple Viterbi algorithm implementation, but for only two states (honest and dishonest casino)

I have a problem where I want to address my array, but I don't think I'm getting the types right. That, or I'm making a new array each time I try to address it - equally stupid. Look at myArray, the infix, and dynamicProgram especially, PLEASE. Pretty pretty please.

 Code


import Array
import Char

trans :: Int -> Int -> Double -> Double -> Double
trans from x trans11 trans21 =
    if (from == 1) && (x == 1)
        then trans11
    else if (from == 1) && (x == 2) 
        then (1-trans11)
    else if (from == 2) && (x == 1) 
        then trans21
    else (1-trans21)

em :: Char -> [Double] -> Double
em c list = list!! a
    where a = digitToInt c

intToChar :: Int -> Char
intToChar n | n == 1 = '1'
            | n == 2 = '2'

casino :: Char -> Int -> Int -> [Double] -> [Double] -> Double -> Double -> Double
casino seqchar 1 y em1 em2 t1 t2= 0.5 * (em seqchar em1)
casino seqchar 2 y em1 em2 t1 t2= 0.5 * (em seqchar em2)
casino seqchar x y em1 em2 t1 t2= maximum[ (1 @@ y-1)*(em seqchar em1)*(trans 1 x t1 t2),(2 @@ y-1)*(em seqchar em2)*(trans 2 x t1 t2) ]

dynamicProgram :: [Char] -> (Char -> Int -> Int -> [Double] -> [Double] -> Double -> Double -> Double) -> [Double] -> [Double] -> Double -> Double -> (Array a b)
dynamicProgram string score list1 list2 trans11 trans21 = myArray 1 len
              [score (string!!y) x y list1 list2 trans11 trans21 | x  Int -> [Double] -> Array a b
myArray startIndex endIndex values = listArray (startIndex,startIndex) (endIndex,endIndex) values

traceback :: [Char] -> Int -> Int -> [Double] -> [Double] -> Double -> Double -> [Char]
traceback s 1 0 em1 em2 t1 t2 = []
traceback s 2 0 em1 em2 t1 t2 = []
traceback s x y em1 em2 t1 t2 | x@@y == (1 @@ y-1)*(em (s!!y) em1)*(trans 1 x t1 t2) = '1' : traceback s 1 (y-1) em1 em2 t1 t2
                            | x@@y == (2 @@ y-1)*(em (s!!y) em1)*(trans 2 x t1 t2) = '2' : traceback s 2 (y-1) em1 em2 t1 t2 

answer :: [Char] -> [Double] -> [Double] -> Double -> Double -> [Char]
answer string list1 list2 t1 t2 = reverse $ maxC : traceback string max end list1 list2 t1 t2 $ dynamicProgram casino string list1 list2 t1 t2
   where
      end = (length string) + 1
      max | maximum (1@@end) (2@@end) == 1@@end = 1
      | maximum (1@@end) (2@@end) == 2@@end = 2
      maxC = intToChar max

infix 5 @@
(@@) i j = myArray ! (i, j)

main = do
    putStrLn "What is the sequence to test?"
    seq  state 1 transmission probability?"
    trp1  state 2 transmission probability is " ++ (1-trp1)
    putStrLn "What is the state 2 -> state 1 transmission probability?"
    trp2  state 2 transmission probability is " ++ (1-trp2)
    putStrLn "I assume that the prob of starting in either state is 1/2.  Go!"
    answer seq st1 st2 trp1 trp2
From stackoverflow
  • I copied the code from the edit window (something in stackoverflow's parser is eating part of the code) and tried it on ghci, which found several errors. The first error was:

    foo.hs:34:71:
        Couldn't match expected type `[e]' against inferred type `(a, b)'
        In the second argument of `listArray', namely
            `(endIndex, endIndex)'
        In the expression:
            listArray (startIndex, startIndex) (endIndex, endIndex) values
        In the definition of `myArray':
            myArray startIndex endIndex values
                      = listArray (startIndex, startIndex) (endIndex, endIndex) values
    

    The type of listArray is:

    listArray :: (Ix i) => (i, i) -> [e] -> Array i e
            -- Defined in GHC.Arr
    

    It takes a tuple with the lower and upper bounds and the list. So, the correct expression would probably be:

    listArray (startIndex, endIndex) values
    

    And the type of myArray is not Array a b, it is Array Int Double.

    The second error was:

    foo.hs:43:44:
        Couldn't match expected type `a -> b'
               against inferred type `[Char]'
        In the first argument of `($)', namely
            `maxC : (traceback string max end list1 list2 t1 t2)'
        In the second argument of `($)', namely
            `(maxC : (traceback string max end list1 list2 t1 t2))
           $ (dynamicProgram casino string list1 list2 t1 t2)'
        In the expression:
              reverse
            $ ((maxC : (traceback string max end list1 list2 t1 t2))
             $ (dynamicProgram casino string list1 list2 t1 t2))
    

    $ is right associative, so the rightmost $ is looked at first. The first parameter to it must be a function, which it will call with its rightmost parameter as the argument. Here, however, it is a list.

    The third error was:

    foo.hs:51:11:
        Couldn't match expected type `Array i e'
               against inferred type `Int -> Int -> [Double] -> Array a b'
        In the first argument of `(!)', namely `myArray'
        In the expression: myArray ! (i, j)
        In the definition of `@@': @@ i j = myArray ! (i, j)
    

    myArray is not an array; it is a function which takes three parameters and constructs an array based on them.

    Here your being used to more traditional imperative languages is probably tripping you up. In a traditional imperative language, it would be natural to have a global myArray variable which you could then access from the middle of your program. In Haskell, however, absent more advanced trickery which you should not try while you are a beginner, a "global" variable is more like a constant value (which is lazily computed on first use, but which as far as you care could have been computed by the compiler while generating your executable). You cannot initialize it from values you read as input.

    Your best way around that is to pass the array through the program, which unfortunately will need several changes and negates the usefulness of your @@ operator. You can hide the passing of the array in several more advanced ways, but while learning it is best to be more explicit.

    The last error was:

    foo.hs:63:4:
        Couldn't match expected type `[a]' against inferred type `IO ()'
        In the first argument of `(++)', namely
            `putStrLn
               "I assume that the state 1 -> state 2 transmission probability is "'
        In the expression:
              (putStrLn
                 "I assume that the state 1 -> state 2 transmission probability is ")
            ++
              (1 - trp1)
        In a 'do' expression:
              (putStrLn
                 "I assume that the state 1 -> state 2 transmission probability is ")
            ++
              (1 - trp1)
    

    This has two errors: the one the compiler complained about is a precedence problem, as the compiler-added parentheses readily show, and it can easily be fixed by correct application of either parentheses or the $ operator. The other error, which you will find after you fix this one, is that you cannot concatenate a string and a number; you have to convert the number to a string.

    This was all without looking at the algorithm or even most of the code, just looking at the compiler errors. If you want a two-dimensional array, for instance, the correct expression for the first error would be:

    listArray ((startIndex, startIndex), (endIndex, endIndex)) values
    

    Now both bounds are tuples, and its type is Array (Int, Int) Double.

    Godeke : Very nicely done!
  • Thank you a lot Cesar. Then again, I think I figured out how I misinterpreted functional programming.

    Now I am just defining a score function that is recursive based on the value before, and define a base case. Then all I do is feed in the string to interpret and index = length, and it calculates the end state, from which I then traceback to the beginning. So yeah, I was just trapped in an imperative mindset

    aka no more arrays!
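    For comparison with the array-free approach described above, the same two-state Viterbi recurrence (best score at step i from the best predecessor at step i-1, then a traceback) can be sketched in Python; the emission/transition numbers below are made-up stand-ins for the casino parameters:

```python
def viterbi(obs, emit, trans, start=(0.5, 0.5)):
    """Two-state Viterbi. obs: list of symbol indices; emit[s][o] and
    trans[s][t]: emission/transition probabilities for states 0 and 1."""
    # score[i][s] = probability of the best path ending in state s at step i
    score = [[start[0] * emit[0][obs[0]], start[1] * emit[1][obs[0]]]]
    back = [[None, None]]  # back[i][s] = best predecessor of state s at step i
    for o in obs[1:]:
        prev = score[-1]
        row, ptr = [], []
        for s in (0, 1):
            cands = [prev[t] * trans[t][s] * emit[s][o] for t in (0, 1)]
            best = 0 if cands[0] >= cands[1] else 1
            row.append(cands[best])
            ptr.append(best)
        score.append(row)
        back.append(ptr)
    # Trace back from the best final state.
    state = 0 if score[-1][0] >= score[-1][1] else 1
    path = [state]
    for ptr in reversed(back[1:]):
        state = ptr[state]
        path.append(state)
    return list(reversed(path))

# Made-up parameters: state 0 ~ "fair" (uniform), state 1 ~ "loaded".
emit = [[0.5, 0.5], [0.1, 0.9]]
trans = [[0.9, 0.1], [0.2, 0.8]]
print(viterbi([0, 0, 1, 1, 1], emit, trans))
```

    The score rows play the role the question's array was meant to play; each row only depends on the previous one, which is why the recursive, array-free formulation works just as well.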

  • You can rewrite the trans-function like this:

    trans :: Int -> Int -> Double -> Double -> Double
    trans 1 1 trans11 trans21 =   trans11
    trans 1 2 trans11 trans21 = 1-trans11
    trans 2 1 trans11 trans21 =   trans21
    trans _ _ trans11 trans21 = 1-trans21