Thursday, March 24, 2011

What is the best way to repeatedly execute a function every x seconds in Python?

I want to repeatedly execute a function in Python every 60 seconds forever (just like an NSTimer in Objective C). This code will run as a daemon and is effectively like calling the python script every minute using a cron, but without requiring that to be set up by the user.

In this question about a cron implemented in Python, the solution appears to effectively just sleep() for x seconds. I don't need such advanced functionality, so perhaps something like this would work:

while True:
    # Code executed here
    time.sleep(60)

Are there any foreseeable problems with this code?
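
One foreseeable problem is drift: each cycle lasts 60 seconds plus however long the work itself takes, so the loop slowly falls behind a true once-a-minute schedule. A minimal sketch of a drift-compensating loop (the repeat_every helper and its parameters are invented for illustration, not from any library):

```python
import time

def repeat_every(interval, func, iterations=None):
    """Call func every `interval` seconds, compensating for drift.

    A bare time.sleep(interval) loop drifts, because each cycle lasts
    `interval` plus however long func itself takes. Anchoring every run
    to the original start time keeps the schedule honest.
    """
    start = time.time()
    count = 0
    while iterations is None or count < iterations:
        func()
        count += 1
        if iterations is not None and count >= iterations:
            break
        # Sleep until the next multiple of `interval` after `start`.
        delay = start + count * interval - time.time()
        if delay > 0:
            time.sleep(delay)
```

With iterations=None this runs forever, like the while/sleep loop, but the k-th run stays pinned near start + k * interval.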

From stackoverflow
  • Use the sched module

    import sched, time
    s = sched.scheduler(time.time, time.sleep)
    def do_something(sc): 
        print "Doing stuff..."
        # do your stuff
        sc.enter(60, 1, do_something, (sc,))
    
    s.enter(60, 1, do_something, (s,))
    s.run()
    
    Baishampayan Ghose : The sched module is for scheduling functions to run after some time, how do you use it to repeat a function call every x seconds without using time.sleep()?
    nosklo : @Baishampayan: Just schedule a new run.
    Alabaster Codify : Kronos, based on sched, offers a higher level interface: http://www.razorvine.net/download/kronos.py Used by TurboGears.
  • The main difference between that and cron is that an exception will kill the daemon for good. You might want to wrap with an exception catcher and logger.
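
    Such a wrapper might look like the following sketch (safe_loop and its parameter names are invented for illustration):

```python
import logging
import time

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("scheduler")

def safe_loop(func, interval, iterations=None):
    """Run func every `interval` seconds; log exceptions instead of dying.

    A bare `while True: func(); time.sleep(interval)` daemon is killed
    for good by the first unhandled exception; cron would simply try
    again next minute, and this loop mimics that behaviour.
    """
    count = 0
    while iterations is None or count < iterations:
        try:
            func()
        except Exception:
            # Log the full traceback and carry on with the schedule.
            log.exception("scheduled job failed; will retry")
        count += 1
        time.sleep(interval)
```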

  • You might want to consider Twisted which is a python networking library that implements the Reactor Pattern.

    from twisted.internet import task
    from twisted.internet import reactor
    
    timeout = 60.0 # Sixty seconds
    
    def doWork():
        #do work here
        pass
    
    l = task.LoopingCall(doWork)
    l.start(timeout) # call every sixty seconds
    
    reactor.run()
    

    While "while True: sleep(60)" will probably work, Twisted likely already implements many of the features that you will eventually need (daemonization, logging, or exception handling as pointed out by bobince) and will probably be a more robust solution.

    Baishampayan Ghose : I knew Twisted could do this. Thanks for sharing the example code!

Best Practice for Subversion and Image Uploads

Running an ASP.Net website and using TinyMCE for content management. Users need to be able to upload images for content. The images will be shown in website content (news, articles, etc) but won't otherwise be referenced by anything aside from their content.

Should these images be under version control (svn), or should the upload directory just be passed around when needed (backup, testing, etc)? If they should be versioned, what's the best way to do this?

From stackoverflow
  • The content is not a part of the application, just the instance you have running, so it should be added to your backup, not your SVN repository. I only add images to SVN if they are a part of the application itself.

  • There's no reason not to store images in your version management repository, and it makes a great tool for handling changes-of-mind, who-did-what and how things evolve over time.

    The only thing you need to do is ensure that you have the needs-lock property on these files - the last thing you want is for users to get a file, modify it, then attempt to check it in only to find someone else has modified it already. Normally this is not a problem due to SVN's merge ability, but it's a bit tricky to merge image files. So, put the needs-lock property on and only one user can modify an image at a time.

    ah.. just read it again, do you mean user-uploaded content that is not part of your website? In which case, why do you need those images at all - back them up of course, but otherwise do you need them at all anywhere other than on the live website?

What is the best way to represent a many-to-many relationship between records in a single SQL table?

I have a SQL table like so:

Update: I'm changing the example table as the existing hierarchical nature of the original data (State, Cities, Schools) is overshadowing the fact that a simple relationship is needed between the items.

entities
id      name               
1       Apple     
2       Orange            
3       Banana             
4       Carrot                
5       Mushroom

I want to define two-way relationships between these entities so a user viewing one entity can see a list of all related entities.

The relationships are defined by an end user.

What is the best way to represent these relationships in the database and subsequently query and update them?

One way as I see it...

My instinct says a relationship table like so:

entity_entity
entity_id_a       entity_id_b
1                 2
5                 1
4                 1
5                 4
1                 3

That being the case, given a supplied entity_id of 4, how would one get all related records, which would be 1 and 5?

Likewise a query of entity_id = 1 should return 2, 3, 4, and 5.

Thanks for your time and let me know if I can clarify the question at all.

From stackoverflow
  • I think the structure you have suggested is fine.

    To get the related records do something like

    SELECT related.* FROM entities AS search 
    LEFT JOIN entity_entity map ON map.entity_id_a = search.id
    LEFT JOIN entities AS related ON map.entity_id_b = related.id
    WHERE search.name = 'Search term'
    

    Hope that helps.

    Bill Karwin : What if my search term matches an entity whose id occurs only in entity_id_b in the map?
    Bill Karwin : In other words, your query works only if every relationship is stored twice, reverse. E.g. (1,4) and (4,1).
  • select * from entities
    where id in 
    (
        select entity_id_b 
        from entity_entity 
        where entity_id_a = @lookup_value
    )
    
  • Define a constraint: entity_id_a < entity_id_b.

    Create indexes:

    CREATE UNIQUE INDEX ix_a_b ON entity_entity(entity_id_a, entity_id_b);
    CREATE INDEX ix_b ON entity_entity(entity_id_b);
    

    The second index doesn't need to include entity_id_a, as you will use it only to select all a's within one b. A RANGE SCAN on ix_b will be faster than a SKIP SCAN on ix_a_b.

    Populate the table with your entities as follows:

    INSERT
    INTO entity_entity (entity_id_a, entity_id_b)
    VALUES (LEAST(@id1, @id2), GREATEST(@id1, @id2))
    

    Then select:

    SELECT entity_id_b
    FROM entity_entity
    WHERE entity_id_a = @id
    UNION ALL
    SELECT entity_id_a
    FROM entity_entity
    WHERE entity_id_b = @id
    

    UNION ALL here lets you use the above indexes and avoids the extra sort for uniqueness that plain UNION would require.

    All above is valid for a symmetric and anti-reflexive relationship. That means that:

    • If a is related to b, then b is related to a

    • a is never related to a

    GloryFish : This approach is working very well in practice. Thank you kindly.
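
    A quick way to sanity-check this scheme is a sketch using Python's sqlite3 (SQLite spells the two-argument LEAST/GREATEST as scalar MIN/MAX; the helper names here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entity_entity (
        entity_id_a INTEGER NOT NULL,
        entity_id_b INTEGER NOT NULL,
        CHECK (entity_id_a < entity_id_b)
    );
    CREATE UNIQUE INDEX ix_a_b ON entity_entity(entity_id_a, entity_id_b);
    CREATE INDEX ix_b ON entity_entity(entity_id_b);
""")

def relate(id1, id2):
    # SQLite's scalar MIN/MAX play the role of LEAST/GREATEST.
    conn.execute(
        "INSERT INTO entity_entity VALUES (MIN(?1, ?2), MAX(?1, ?2))",
        (id1, id2))

def related(entity_id):
    """All entities related to entity_id, via the symmetric UNION ALL query."""
    rows = conn.execute("""
        SELECT entity_id_b FROM entity_entity WHERE entity_id_a = ?1
        UNION ALL
        SELECT entity_id_a FROM entity_entity WHERE entity_id_b = ?1
    """, (entity_id,))
    return sorted(r[0] for r in rows)

# The sample relationships from the question.
for pair in [(1, 2), (5, 1), (4, 1), (5, 4), (1, 3)]:
    relate(*pair)
```

    With this data, related(4) returns [1, 5] and related(1) returns [2, 3, 4, 5], matching the expected results in the question.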
  • I can think of a few ways.

    A single pass with a CASE:

    SELECT DISTINCT
        CASE
            WHEN entity_id_a <> @entity_id THEN entity_id_a
            WHEN entity_id_b <> @entity_id THEN entity_id_b
        END AS equivalent_entity
    FROM entity_entity
    WHERE entity_id_a = @entity_id OR entity_id_b = @entity_id
    

    Or two filtered queries UNIONed thus:

    SELECT entity_id_b AS equivalent_entity
    FROM entity_entity
    WHERE entity_id_a = @entity_id
    UNION
    SELECT entity_id_a AS equivalent_entity
    FROM entity_entity
    WHERE entity_id_b = @entity_id
    
  • The link table approach seems fine, except that you might want a 'relationship type' so that you know WHY they are related.

    For example, the relation between Raleigh and North Carolina is not the same as a relation between Raleigh and Durham. Additionally, you may want to know who is the 'parent' in the relationship, in case you were driving conditional drop-downs. (i.e. You select a State, you get to see the cities that are in the state).

    Depending on the complexity of your requirements, the simple setup you have right now may not be sufficient. If you simply need to show that two records are related in some way, the link table should be sufficient.

    GloryFish : I see what you are getting at. In this case we are specifically not representing a hierarchy. There will only ever be one state in this system and the relationships won't be used for a drill-down style navigation.
  • I already posted a way to do it in your design, but I also wanted to offer this separate design insight if you have some flexibility in your design and this more closely fits your needs.

    If the items are in (non-overlapping) equivalence classes, you might want to make equivalence classes the basis for the table design, where everything in class is considered equivalent. The classes themselves can be anonymous:

    CREATE TABLE equivalence_class (
        class_id int -- surrogate, IDENTITY, autonumber, etc.
        ,entity_id int
    )
    

    entity_id should be unique for a non-overlapping partition of your space.

    This avoids the problem of ensuring proper left- or right-handedness or forcing an upper-triangular relationship matrix.

    Then your query is a little different:

    SELECT c2.entity_id
    FROM equivalence_class c1
    INNER JOIN equivalence_class c2
        ON c1.entity_id = @entity_id
        AND c1.class_id = c2.class_id
        AND c2.entity_id <> @entity_id
    

    or, equivalently:

    SELECT c2.entity_id
    FROM equivalence_class c1
    INNER JOIN equivalence_class c2
        ON c1.entity_id = @entity_id
        AND c1.class_id = c2.class_id
        AND c2.entity_id <> c1.entity_id
    
    Bill Karwin : Nice! You can also test c2.entity_id <> c1.entity_id, instead of c2.entity_id <> @entity_id. That way you don't have to pass the @entity_id parameter twice.
    Cade Roux : I assumed it would be a stored procedure, but yes, that would be equivalent for the parameterized ad hoc query devotees.
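
    The equivalence-class query can be exercised with a minimal SQLite sketch (the class ids 10 and 20 are arbitrary surrogates, and non-overlapping classes are assumed, as the answer requires):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE equivalence_class (
        class_id  INTEGER NOT NULL,
        entity_id INTEGER NOT NULL UNIQUE  -- unique: classes must not overlap
    )
""")
# Two anonymous classes: {1, 2, 3} and {4, 5}.
conn.executemany("INSERT INTO equivalence_class VALUES (?, ?)",
                 [(10, 1), (10, 2), (10, 3), (20, 4), (20, 5)])

def classmates(entity_id):
    """Everything related to entity_id, i.e. the rest of its class."""
    rows = conn.execute("""
        SELECT c2.entity_id
        FROM equivalence_class c1
        JOIN equivalence_class c2
          ON c1.entity_id = ?1
         AND c1.class_id = c2.class_id
         AND c2.entity_id <> c1.entity_id
    """, (entity_id,))
    return sorted(r[0] for r in rows)
```

    Here classmates(1) yields [2, 3] and classmates(4) yields [5]; note the semantics differ from the pair table, since class membership relates every member to every other.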
  • My advice is that your initial table design is bad. Do not store different types of things in the same table. (First rule of database design, right up there with not storing multiple pieces of information in the same field.) It is much harder to query and will cause significant performance problems down the road. Plus, it would be a problem entering data into the relationship table: how do you know which entities need to be related when you make a new entry? It would be much better to design properly relational tables. Entity tables are almost always a bad idea. I see no reason at all from the example to have this type of information in one table. Frankly, I'd have a university table and a related address table. It would be easy to query and perform far better.

  • Based on your updated schema this query should work:

    select if(entity_id_a = :entity_id, entity_id_b, entity_id_a) as related_entity_id
    from entity_entity
    where :entity_id in (entity_id_a, entity_id_b)
    

    where :entity_id is bound to the entity you are querying

VBScript Tutorials / Reference

I'm trying to write a script in VB, but I'm not finding any good tutorials or references to start from. It's something relatively simple (playing a song whenever it reaches a certain time), but most of the stuff that I'm finding is more geared towards embedding the scripts into webpages.

Do you have any suggestions for some good reference sites?

From stackoverflow

Static Port Assignment in SQL Server 2008 Does not save

I have two named instances of SQL Server 2008 and am trying to set static ports for each instance. I open the SQL Server Configuration Manager -> expand "SQL Server Network Configuration" -> click the instance I want to change -> and select TCP/IP.

From here, any configuration changes that I make are not persisted after I hit OK. I've tried setting the "TCP Dynamic Ports" option to blank and setting my desired port number under the "TCP Port" option in the IPALL section, but each time I return to the configuration screen the changes no longer appear.

I've tried bouncing the service and that doesn't help either.

Does anyone have any thoughts as to what is going on here?

From stackoverflow
  • Did you restart the SQL Server after you made the change?

  • Are you changing the TCP Port under a specific NIC or under the IPAll section?

    You should be doing it under the IPAll (at the very bottom).

    Bart : IPALL is what I'm using
  • Turns out the problem was that my account wasn't a local admin on the machine. I'm a sysadmin on the SQL Server so I assumed it would work.

    The frustrating thing is that the interface didn't alert me that I didn't have permissions; it simply told me that my changes were saved and proceeded to ignore them.

    Thanks to all those who answered, though!

ajax and accessibility

1) How important is it for the site to be accessible without javascript? I'm using a lot of ajax. I converted most of the site to be accessible without js, but the effort involved left me wondering if it was worth it.

2) What are the sort of scenarios (that occur fairly often) in which javascript might be turned off? (apart from people being paranoid and turning off js)

I'm developing a website that caters exclusively to the students at my university. I know that most (99%) of the users of the site will access it through a normal web browser (no screen readers, mobiles, etc.).

I see that even large sites like digg, reddit simply stop working when I turn off js, without any attempt to provide html only access. Even in SO, it isn't possible to vote or view comments without javascript (though there are some nice error messages shown)

Edit: SEO is not a major concern, since it is a very niche website and marketing is done by other means. And right now, it has been indexed and is the first result when searching for the site name.

From stackoverflow
  • The most common non-accessible non-paranoid reason why JS would be disabled is when search engine bots come to index the content. You need to be able to handle that if you want to be listed properly on search engines.

    EDIT for your edit: Fair enough. It really depends on your site's features. If it's a primarily informational site, then requiring JS is absurd. If it's more of a web application, then not requiring makes it much harder to use. Make the informational parts (if any) as accessible as possible, and do what you want with the rest.

  • BlackBerrys and other portable web browsers often have JavaScript off by default.

    annakata : most mobile devices full stop in fact
    spoulson : Except the iPhone. Mine handles a great number of AJAX enabled sites just fine. Caveat is that too much javascript processing can cause browser crashes.
  • I asked a very related question here. Even though I say there that the main aim was not accessibility but ease of use, it might be an interesting read for you. I was one of the developers for my university's websites, and our lead was of the opinion that all websites in domains like education, non-profit organizations, government organizations, etc. should be 100% accessible. Ideally you would want the pages to "work" without your CSS and client-side code (JS/VBScript). We analyzed our pages using this and/or this to check our sites for accessibility.

    trex279 : It's not officially a university site (fixed wording in question).
  • The Importance of Being Accessible

    It may be very important for your website to be highly accessible, especially if the site is being built for an organization which is subsidized by federal dollars.

    The Rehabilitation Act was amended in 1998 and now requires Federal agencies to make their electronic and information technology accessible to people with disabilities.

    There are similar laws applying to e-commerce sites, applying to the online storefronts of traditional retailers.

    You can look into Section 508 for more info, but the main idea is that partial page refreshes won't be read by modern screen readers, so if your site needs to be accessible, the extra effort is required and certainly worth it.

    Many web frameworks are still in use which did not anticipate ajax, and it can require a lot of work-arounds to make things accessible. Still, it's really the best thing to do, even if you are developing a private website.

    Here are a couple of other articles which deal with the topic:

    Users without javascript

    As far as "turning off" javascript goes, users don't do this anywhere near as often as they did 5 years ago, though some still may. This will not likely be the case with your audience, and it's generally not considered the major concern it once was.

    These days, the real concern is just client support. All modern browsers support enough javascript to allow you to do your work. It's the alternative clients, like the accessibility devices you mentioned, which may add requirements to your design.

    If some of your audience works in a security-sensitive environment (government agencies, etc.), it may still be mandated that javascript is turned off on their work machines. This is also becoming less and less of a problem as time goes on, though it's a more common case than the paranoia issue you mentioned.

    Of course, if you offer some support for those users, you won't have to worry about it.

  • It sounds like your site is going to be used in an educational environment; many countries have laws regarding the accessibility of sites in education. Beyond this, search engines and mobile browsers, as well as screen readers, will benefit from a standards compliant website.

    That's not to say you can't use Javascript, just that you should be careful. In an ideal world you should be using XHTML for your content, CSS for your style, and Javascript for your behavior. On the latter point see these two Wikipedia articles:

    roryf : +1 for Progressive Enhancement. Sounds like the OP has the wrong approach, you shouldn't build for Javascript and then make accessible, build for no-Javascript and then add it where appropriate.
  • It depends on your website.

    If you are developing a web application intended for desktop users only, lots of Javascript is probably ok. Otherwise, strongly consider making your site accessible to mobile users, impaired users, or simply paranoid users who turn Javascript off. One of the best strategies to follow is that of graceful degradation, in which users without Javascript can still interact with your site, just without the flashy features.

Implementing ACID

I am starting research on a project that will need to provide ACID semantics on its database.

Due to the nature of the data it is not suitable for storage in common off-the-shelf systems (relational or key-value).

What are some good resources on how to implement systems which must provide ACID semantics?

My typical Google search returns more information about systems which already provide ACID semantics rather than how to go about implementing such systems.

From stackoverflow
  • ARIES is a popular algorithm for implementing an ACID database (e.g. SQL Server uses this algorithm).

    1. Wikipedia on ARIES
    2. The ARIES paper
    Andrew Rollings : Care to summarize here?
  • Timothy Leary - How to Operate Your Brain

    http://www.youtube.com/watch?v=SQq_XmhBTgg

    Cody Brocious : +1: Best answer I've seen in a long while. Kudos.
    Andrew Rollings : Not exactly helpful though :) (Unless the ensuing mind expansion allowed you to solve the problem).
    Cody Brocious : Vision quest coding should probably be reserved for compiler/OS development and reverse-engineering. I can't imagine that working out well for databases.
    rmeador : Wasn't Oracle developed primarily under the influence of mind-altering drugs?
  • If you know German, I'd recommend

    • Alfons Kemperer: Datenbanksysteme - Eine Einführung, ISBN 3486576909

    "Einführung", which means "introduction", is a gross understatement. The book has several chapters on how you would physically lay out the data, WAL (write-ahead logging), serializable vs. non-serializable histories, restart after failures, and much more.

    I doubt, though, that you really want to write something like that. Do I need to remind you that, in theory, you can model any data structure on top of the relational model?

    Eloff : Better to waste the flexibility of a relational database than 6 months of your life...

Should Exception Messages be Globalized

I'm working on a project and I'm just starting to do all the work necessary to globalize the application. One thing that comes up quite often is whether to globalize the exception messages, which would mean having string.Format use CultureInfo.CurrentCulture instead of CultureInfo.InvariantCulture. Additionally, it would mean that exception messages would be stored in resource files that can be marked as culture-specific.

So the question is, should exception messages be globalized, or should they be left in either the InvariantCulture or the author's own language; in my case, en-US.

From stackoverflow
  • If you are going to be the one to deal with the exceptions, then either leave them in a language you can understand, or give them codes so you can look them up in your native language.

  • IDEs should be able to help you externalize those strings.

    I'm a Java programmer, so I'm used to IntelliJ helping me create resource bundles for I18N of messages, labels, etc. Is there something analogous in .NET?

  • typically, I don't.

    Globalize strings that may be seen by a user, and you don't let your exception messages percolate up to the UI, right?

    Right? :)

    StingyJack : Yes Elliot (http://riiight.net/)
    Harry Steinhilber : +1 for having to go up against Jon :)
    MarkJ : +1 for saying the same as Jon S, but briefly
  • Exception messages should rarely be displayed directly to the user. You need to think of the consumer for each string. Obviously pieces of text in the user interface need internationalizing, but if an exception message is only going to be seen by support (or is going to be visible to the user and then emailed to support when they click a button) then where's the benefit of translating it?

    If you go too far, you could not only waste time and effort (and i18n can take a lot of effort) but also make your support life harder. You really don't want to have to read log files written in a foreign language and translate them back to your native tongue.

    It makes sense for Microsoft to internationalize their exception messages, because they will be read by developers from all over the world - but unless you're a multinational with developers in multiple countries who don't share a common language, I wouldn't translate messages which are really meant for dev/support.

    Andrew Hare : +1 Much more practical approach I think.
    Outlaw Programmer : +1 but one 'reason' I've heard from a former employer is that a similar exception can be thrown by multiple methods. If they ever wanted to change the wording of an exception, they just need to edit the resource bundle in one spot. I thought it was stupid...
    Jon Skeet : @Outlaw: Yup, that's stupid - because it means if you need to change the wording for one site but not the other (which seems pretty likely to me) you've got much more work to do.
  • I assume by globalize, you mean i18n compliant which is usually called internationalize. Yes, internationalize all visible parts of the GUI, including diagnostic messages. The log file, which is where developers should go to get the real information such as the stack trace, should not be internationalized.

    StingyJack : In .Net, the namespaces are called "Globalization". We are stuck with whatever jargon MS imposes. =/

What function do I use in Sql Server 2005 to get the current datetime in utc?

I've used GetDate() in defaults for a long time.
What is the equivalent call, or logic, in SQL 2005 for me to put in a default that will give me the current UTC time?

From stackoverflow
  • GETUTCDATE()
    
  • GETUTCDATE()

    The difference between GETDATE() and GETUTCDATE() is the time zone offset of the SQL Server machine.

LDAP Query Distinct

Hi, is it possible to list only distinct values with an LDAP query? I am getting duplicates of users for some reason, but I only want them to be reported once. Thank you for your help.

From stackoverflow
  • Linq to Active Directory might help you here:

    http://www.codeplex.com/LINQtoAD

  • I would be more concerned about why an LDAP query is returning users more than once, unless they exist more than once. That would seem to indicate a real issue in your LDAP source.

    That pretty much should not be happening.

What are the alternatives to Socket Programming?

I was browsing around for programming troubleshooters, and I was wondering what the alternatives to socket programming would be, if there are any.

From stackoverflow
  • Assuming you have control over the protocol being used, you could go for a higher level of abstraction like .NET's Remoting or WCF.

    jro : Don't waste your time with Remoting. It's being deprecated.
    Cody Brocious : jro, Mono's WCF support is limited, so Remoting is the only choice if you care about Mono.
  • TCPClient and UDPClient abstract you from the underlying sockets to some degree. They're nice to work with.

  • There are a ton of communication methods and protocols, depending on the communication environment and scenarios. Socket programming is great for high-performance intranet communication. The downside is that you have to supply your own protocol and message structure, since communication is binary. This isn't a very interoperable solution, if that is what you are striving for. There is IPC (process-to-process named pipes). For internet communications, you can use legacy ASMX web services, MSMQ (message queuing), or the newer WS-* protocols.

    I recommend looking at WCF, which is an SDK that unifies many of these communication technologies. WCF abstracts your business logic from the underlying plumbing. A single service can support binding to various transport technologies by changing only metadata while the underlying WCF framework handles the rest (this is oversimplifying a little, but it gets the point across).

sync SQL Server 2005 login passwords

Background:
We are running a web application where each user has a login to the system. The application login is mapped to an actual SQL Server 2005 login (which we needed to create). Our development and disaster recovery sites are simply copies of this setup. On a nightly basis, the production database is backed up, the dump is archived, and we restore dev and DR using this file. When this is done, we need to run sp_change_users_login for each user to remap the database user to the SQL login.

Problem:
When the user changes their password on production, the SQL login password is changed. This is not getting synced to dev/DR, so if they try to log on to one of those sites, they can't, and need to reset their password. Is there a [good] way to keep these SQL logins synced across multiple installs?

The next version of this product eliminates the SQL login need, but upgrading is not a current priority.

From stackoverflow
  • Script the logins with the password hashed and then drop and re-create them on your target server after you drop the database and before you restore the database back-up. That's how we script SQL2005 logins with our scripter software. You might like to try the software - www.dbghost.com - or build your own solution.

    caseyboardman : Accepted, thank you. I will post example code soon.
  • Solution:

    This is a follow up to markbaekdal's answer. Here's how I did it:

    I run the following against the production database:

    SELECT  'ALTER LOGIN ' + CAST(name AS VARCHAR) + ' WITH PASSWORD = ', password_hash, ' HASHED;'
    FROM    sys.sql_logins 
    JOIN    mydatabase..mytable
    ON      mycolumn = name
    GO
    

    and pipe it through "findstr ALTER" (ah, Windows) to a file named loginUpdates.sql. I then run that file against the development and DR databases. It works like a charm.

    If you want to get really hardcore, here's a support article a coworker of mine found: http://support.microsoft.com/kb/918992.

Html.ActionLink in asp.net MVC object value in wrong format

I have an Html.ActionLink that I wish to use to display a link to a member's profile page, like this: http://somesite.com/members/{username}

When I use the following markup:

<%= Html.ActionLink(r.MemberName, "profile", new { MemberName = r.MemberName } )%>

I get a link that looks like this: http://somesite.com/members?MemberName={username}

What would I need to change in the ActionLink helper to achieve a URL like this:

http://somesite.com/members/{username}

From stackoverflow
  • Assuming in your routes the username token is {username} like you show, try this:

    <%= Html.ActionLink(r.MemberName, "profile", new { username = r.MemberName } )%>
    
  • You should add the route that maps "/members/{MemberName}" before other routes in the routing table.

  • Thanks for both your responses... I did not have my route matching the value name.

    Simply ensuring that my route url matched made it work.

    Here's my code....

    //Global.asax
    routes.MapRoute(
        "Profile",
        "members/{membername}",
        new { controller = "Members", action = "Profile", memberName = "" }
    );
    
    //In the Controller
    public ActionResult Profile(string memberName)
    {
      return View();
    }
    
    //My Action Link
    <%= Html.ActionLink(r.MemberName, "profile", new { memberName = r.MemberName })%>
    

    Thanks again

WCF and Interface Inheritance - Is this a terrible thing to do?

My application has 2 "services", let's say one is a basic (integer) calculator, and one is a floating point calculator. I express these as interfaces like so:

public interface IBasicCalculator
{
 int Add( int a, int b );
}

public interface IFloatingPointCalculator
{
 double Add( double a, double b );
}

I want to expose these via WCF. Unfortunately WCF seems to be very tightly tied to the notion that every possible operation you want to expose must go through one single service interface -- you can't share sessions between services, it is cumbersome from the client side as you need to create a separate proxy for each one, there don't seem to be any "sub-services", etc...

So, I've gathered that I need to present a "combined" interface (one might also call it a facade), like this:

[ServiceContract]
public interface ICalculatorService : IBasicCalculator, IFloatingPointCalculator
{
 [OperationContract(Name = "AddInt")]
 new int Add( int a, int b );

 [OperationContract(Name = "AddDouble")]
 new double Add( double a, double b );
}

If I do this, then WCF exposes both methods to the client, which can call them, and it all actually works.

However, "inheriting the interfaces" like that seems ungainly, particularly the new int Add and new double Add. Strictly speaking, new on a method indicates hiding an underlying method, which I'm not actually doing at all. I can omit the new, but then I just get compiler warnings which amount to "I think I'm hiding this method; you need to either rename the method or put 'new' on it".

So, this is a 2-part question:

  1. Am I on track with my 'combine everything into one interface' logic, or is there actually a way to expose "sub-services" or "multiple linked services" using WCF?

  2. If this is what needs to be done, is there a better way?

Thanks!

From stackoverflow
  • I would say generally, no. Remember, you are dealing with a distributed application technology, not a distributed object technology, so concepts like inheritance don't apply.

    In general, I wouldn't go down this path, but rather, have specific contracts which represent the logical grouping of operations that you want to expose through the endpoint.

  • I just worked out that you can expose multiple endpoints (each using a different interface) on the same service, and you still only need to generate ONE proxy lib on the client, which gives access to all of them. That solves my problem entirely.

    Will close the question as it's no longer relevant (need 2 more votes from other users to do so). Cheers
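    A minimal self-hosting sketch of that arrangement (the hosting code, base address, and class name are illustrative, not from the original post) might look like this: a single service type implements both contracts, and each contract gets its own endpoint under one base address.

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IBasicCalculator
{
    [OperationContract]
    int Add(int a, int b);
}

[ServiceContract]
public interface IFloatingPointCalculator
{
    [OperationContract]
    double Add(double a, double b);
}

// One service type implements both contracts; no combined facade
// interface or "new" methods are needed.
public class CalculatorService : IBasicCalculator, IFloatingPointCalculator
{
    public int Add(int a, int b) { return a + b; }
    public double Add(double a, double b) { return a + b; }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(CalculatorService),
                                   new Uri("http://localhost:8000/calc"));

        // Each contract is exposed as its own endpoint on the same host.
        host.AddServiceEndpoint(typeof(IBasicCalculator),
                                new BasicHttpBinding(), "basic");
        host.AddServiceEndpoint(typeof(IFloatingPointCalculator),
                                new BasicHttpBinding(), "float");

        host.Open();
        Console.WriteLine("Service running. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```

    The "add service reference" tool then generates a single client library containing one proxy class per endpoint, as Orion describes in the comments below.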

  • I believe that what you are describing is not really a best practice, though. Sure, it is feasible to implement more than one service contract on a service type, since it is just a matter of implementing multiple interfaces.

    WCF certainly makes it feasible to support multiple endpoints that communicate using unique URIs and independent contracts. However, the fact that the ClientBase class only accepts one contract interface type, for example, pretty much implies that proxy classes, even if they are stored in the same lib, still each implement only one contract interface.

    If you are actually successful in creating only one proxy class definition, I would love to know how you managed to accomplish this. I may be misunderstanding your needs, though. Implementing different proxy classes gives you the ultimate flexibility, because OperationContractAttribute values like IsInitiating and IsTerminating will likely be different for different contracts. Combining the interfaces of two contracts, as in your example, potentially changes the way you would attribute the methods on the service contract.

    EnocNRoll : Incidentally, I am using a marker interface as the base for all of my data contracts. This allows me to generically deal with data contracts within my WCF framework.
    Orion Edwards : I just used the visual studio "add service reference" tool - it generated one client library - in that library there was one proxy class per endpoint (hence many classes in the one library), but all of the classes "proxy" off the single object on the server, so it doesn't matter that they're separate
    EnocNRoll : Cool, so you are implementing more than one service contract on a single service. Cool, so that means that your sticking point was never about the number of proxies, but really the number of services, and you now know how to accomplish what you are describing. Cool.

Why is the Cancel button not showing up in the asp:Wizard control?

Here is a code snippet for an asp:Wizard control I have on an aspx page, but the Cancel button is not showing up; only the Next and Previous buttons are. Does anyone know how to resolve this?

<asp:Wizard 
    runat="server" 
    OnFinishButtonClick="wizCreateUser_Finish" 
    OnNextButtonClick="wizCreateUser_StepChange"
    HeaderText="SpecEx Application Form" 
    Font-Names="Arial" 
    BackColor="#336699" 
    ForeColor="White" 
    ID="wizCreateUser" 
    Font-Bold="False" 
    Style="border: outset 1px black;"
    Height="320px" 
    Width="861px" 
    ActiveStepIndex="0" 
    DisplaySideBar="False" 
    StartNextButtonImageUrl="~/images/next-btn.PNG" 
    StepNextButtonImageUrl="~/images/next-btn.PNG" 
    StepNextButtonType="Image" 
    StartNextButtonText="" 
    StartNextButtonType="Image" 
    StepNextButtonText="" 
    FinishCompleteButtonImageUrl="~/images/finish-btn.png" 
    FinishCompleteButtonType="Image" 
    FinishPreviousButtonImageUrl="~/images/previous-btn.png" 
    FinishPreviousButtonText="" 
    FinishPreviousButtonType="Image" 
    StepPreviousButtonImageUrl="~/images/previous-btn.png" 
    StepPreviousButtonText="" StepPreviousButtonType="Image"
    CancelButtonImageUrl="~/images/cancel-btn.png"  
    CancelButtonText="" 
    CancelButtonType="Image">
</asp:Wizard>
From stackoverflow
  • You need the "DisplayCancelButton" property set to "True".

    Xaisoft : Thank you, that was it. Is there a way to have the cancel button on the left of the next and previous button?
    Jay S : You may want to refer to: http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.wizard_members.aspx There are a bunch of templating options there to allow you to change the layout of the Wizard control. A quick google will get you some examples of how to template navigation controls.

How/Where to handle ConfigurationErrorsException in a windows service?

I have a windows service that has a custom configuration section. In the configSectionHandler class I am using attributes on the properties to validate the settings like this:

    //ProcessingSleepTime Property
    [ConfigurationProperty("ProcessingSleepTime", DefaultValue = 1000, IsRequired = false)]
    [IntegerValidator(MinValue = 5, MaxValue = 60000)]
    public Int32 ProcessingSleepTime
    {
        get
        {
            if (this["ProcessingSleepTime"] == null)
                return 100;

            return (Int32)this["ProcessingSleepTime"];
        }
        set
        {
            this["ProcessingSleepTime"] = value;
        }
    }

If a value in the configuration file fails validation, a ConfigurationErrorsException is thrown. In a windows service this happens as it is trying to start and it's really ugly (it offers to launch the debugger). How can I gracefully handle this error? I tried wrapping the OnStart method in a try/catch but it had no effect.

Thanks.

From stackoverflow
  • First, check if your configuration contains the key that you're looking for, then wrap it in a try/catch, then check if it's a valid integer:

    int retValue = 100;
    if(this.ContainsKey("ProcessingSleepTime"))
    {
        object sleepTime = this["ProcessingSleepTime"];
        int sleepInterval;
        if(Int32.TryParse(sleepTime.ToString(), out sleepInterval))
        {
           retValue = sleepInterval;
        }
    }
    return retValue;
    
  • Or better yet (as you might need multiple such properties), using the code from @Ricardo Villiamil, create:

    int GetIntFromConfigSetting(string settingName, int defaultValue)
    {
       int retValue = defaultValue;
       if(this.ContainsKey(settingName))
       {
          int sleepInterval;
          if(Int32.TryParse(this[settingName].ToString(), out sleepInterval))
          {
             retValue = sleepInterval;
          }
       }
       return retValue;
    }
    

    Then use it from any property you need to.

    EDIT: actually, after re-reading the question once more, it looks like this only solves your problem half-way: if the value is out of the defined range, it will throw an exception anyway.

    EDIT2: You can hook the AppDomain.UnhandledException event in the static ctor of your config section handler. The static ctor is run before any instance or static member of the class is accessed, so it guarantees that you will intercept the exception even if the main method of your service has not yet been called.

    And then, when you intercept and log the error, you can exit the service with an error code != 0 (Environment.Exit(errorCode)), so the service manager knows it failed but does not try to invoke a debugger.
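    A sketch of that suggestion (the logging call and exit code are placeholders; wire in whatever logging mechanism the service already uses):

```csharp
using System;
using System.Configuration;

public class MyCustomConfigSectionHandler : ConfigurationSection
{
    // The static ctor runs before any member of the class is touched,
    // so this handler is registered before GetSection() can throw a
    // ConfigurationErrorsException while parsing the section.
    static MyCustomConfigSectionHandler()
    {
        AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
        {
            // Placeholder logging: a real service would write to the
            // EventLog or its existing log framework instead.
            Console.Error.WriteLine(
                "Fatal configuration error: " + args.ExceptionObject);

            // A non-zero exit code tells the service control manager the
            // start-up failed, without offering to attach a debugger.
            Environment.Exit(1);
        };
    }

    // ...configuration properties such as ProcessingSleepTime go here...
}
```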

    Loki Stormbringer : This works beautifully! Thanks!
  • Ok I think I have it. In my service I have code that looks like this in the constructor:

    config = ConfigurationManager.GetSection("MyCustomConfigSection") as MyCustomConfigSectionHandler;

    This is where the error is thrown. I can catch the error and log it. The error must be rethrown in order to prevent the service from continuing. This still causes the ugly behavior, but at least I can log the error, thereby informing the user of why the service did not start.

    Sunny : No need to re-throw, just exit with error code - see my answer.

DropDownList OnSelectedIndexChange to 0th index w/out ViewState

I did follow the article TRULY Understanding ViewState (great article, btw) and populating my drop down list is working great. I've even set up an OnSelectedIndexChanged event which fires almost as well.

The problem I've found is that the SelectedIndexChanged event won't fire when selecting the 0th index. It does at all other times, however.

Here's some code:

<asp:DropDownList runat="server" ID="DropDownList1" EnableViewState="false" 
AutoPostBack="True" OnSelectedIndexChanged="DropDownList1_SelectedIndexChanged" />


protected override void OnInit(EventArgs e)
{
    this.DropDownList1.DataTextField = "Text";
    this.DropDownList1.DataValueField = "Value";
    this.DropDownList1.DataSource = fillQueueDropDown();
    this.DropDownList1.DataBind();

    base.OnInit(e);
}

protected void DropDownList1_SelectedIndexChanged(object sender, EventArgs e)
{
    OnSelectedQueueChanged(e);
}

public event EventHandler queueNamesChangedEvent;
public void OnSelectedQueueChanged(EventArgs e)
{
    if (queueNamesChangedEvent != null)
        queueNamesChangedEvent(this, e);
}

I suppose I can do some type of check in the Page_Load method:

  if ((int?)ViewState["selectedIndexChangedFlag"] != 1)
      // raise OnSelectedChange event

Or is there something I can set up in the OnInit() method, where I'm rebinding this data every time, that would handle this?

See, my custom EventHandler raises an event which is caught by the parent page in which this control resides, so that the parent can take some action using the newly selected value. And this is currently working for all cases where the selected index > 0.

I could create a property in this control which contains the most recently selected index, in which case my parent page could act on this property value on every Page_Load... dunno.

Open to suggestions. Or how to force this SelectedIndexChanged event to fire for that 0th index selection.

From stackoverflow
  • Just a random question - do you have a value assigned to the 0th index item or is it an empty string (e.g. a -Select One- option)?

    Dave : Yeah, I usually have a value of some type in the 0th index.
  • The problem is that you are loading the data each time and this is resetting the selected index. Imagine this is your dropdown:

    zero [selected]
    one
    two
    

    Then in the client you change the selected index:

    zero
    one [selected]
    two
    

    This populates the hidden input __EVENTARGUMENT with your new index (1) and the hidden input __EVENTTARGET with the id of your dropdown. Now the server-side code kicks in and reloads your data:

    zero [selected]
    one
    two
    

    "zero" is the selected value because that is the default when the data is loaded. Then ASP.NET looks for __EVENTTARGET and __EVENTARGUMENT in the Request and finds your dropdown's id and finds the new index (1). Now your dropdown looks like this:

    zero 
    one [selected]
    two
    

    Since the index has changed, the dropdown raises its SelectedIndexChanged event indicating that the index has changed. Obviously this is the part that is working; now let's see why selecting the first item in the list does not raise the event.

    Now let's say that we still have the dropdown in the state it was just in (with "one" selected and a selected index of 1). What happens when we select the first item in the list on the client?

    __EVENTTARGET and __EVENTARGUMENT are populated with the id of the dropdown and the new index (0). Then the server loads the data into the dropdown and the dropdown now looks like this again:

    zero [selected]
    one
    two
    

    Notice that since you reloaded the data before the events fired, the index is already set to 0, because that is the default. Now when your event fires and the dropdown's selected index is set to 0, the dropdown does not see this as a change, since the selected index (as far as it knows) has not changed.

    Here is how to fix the problem:

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
    
        if (!Page.IsPostBack)
        {
         this.DropDownList1.DataTextField = "Text";
         this.DropDownList1.DataValueField = "Value";
         this.DropDownList1.DataSource = fillQueueDropDown();
         this.DropDownList1.DataBind();
        }    
    }
    

    What this will do is only load the data into the dropdown if the page is not a postback. This means that ViewState will maintain the data for you as well as the selected index so that when you post back the dropdown will compare the new index to the index you saw in the client.

    Dave : A very good explanation!
  • My goal with disabling the ViewState on this drop down list is to minimize the size of the ViewState for the page.

    The problem I had with only doing the if(!Page.IsPostBack){...DataBind()...}, is that when you select an item for the first time, and the page reloads, my drop down list becomes empty.

    What I ended up doing was creating another Property on this control, LastIndex. When the OnSelectedIndexChanged event fires, I update the LastIndex value. In the Page_Load, I compare the Current and Last index values, if they're different, then fire a Index changed event.

        public int SelectedValue{
            get { return Int32.Parse(this.DropDownList1.SelectedItem.Value); }
        }
    
        public int LastIndex{
            get { return this.ViewState["lastIndex"] == null ? -1 : (int)this.ViewState["lastIndex"]; }
            set { this.ViewState["lastIndex"] = value; }
        }
    
        protected override void OnInit(EventArgs e){
            base.OnInit(e);
            this.DropDownList1.DataTextField = "Text";
            this.DropDownList1.DataValueField = "Value";
            this.DropDownList1.DataSource = fillQueueDropDown();
            this.DropDownList1.DataBind();
        }
    
        protected void Page_Load(object sender, EventArgs e){
            if (this.LastIndex != this.SelectedValue)
                this.OnSelectedQueueChanged(new EventArgs());
        }
    
        private ListItemCollection fillQueueDropDown(){...}
    
        protected void DropDownList1_SelectedIndexChanged(object sender, EventArgs e){
            OnSelectedQueueChanged(e);
            this.LastIndex = this.SelectedValue;
        }
    
        public event EventHandler queueNamesChangedEvent;
        public void OnSelectedQueueChanged(EventArgs e){
            if (queueNamesChangedEvent != null)
                queueNamesChangedEvent(this, e);
        }
    

    You are right, though. The data is re-loaded and re-bound in the OnInit phase. Then the ViewState is restored (which is when the 0th index is restored), and when we finally get to the events phase, the control doesn't detect the change.

    Not sure this is the most elegant route, but it's working well so far.

    Then I found this in the MSDN docs for IPostBackDataHandler:

      public virtual bool LoadPostData(string postDataKey, 
         NameValueCollection postCollection) {
    
         String presentValue = Text;
         String postedValue = postCollection[postDataKey];
    
         if (presentValue == null || !presentValue.Equals(postedValue)) {
            Text = postedValue;
            return true;
         }
    
         return false;
      }
    

    Since the present value is the same as the changed-to value, the event isn't fired.

    Andrew Hare : +1 Very nice - this is an excellent way to do it without ViewState! Sorry I didn't notice you didn't want ViewState - I will read the question more carefully next time.
    Dave : Thank you for your initial solution, it really helped shed some light onto the order of things. I guess I don't know the asp.net page life cycle as well as I thought.
  • Thanks! Very thorough, and it solved my problem! :) It's always nice to know WHY the ASP.NET solution works the way it does. ;)

Msbuild task - Build fails because one solution being built in release instead of debug

I'm hitting a weird issue with msbuild, and I'm thinking it might just be an environment issue. We're using msbuild to build a number of separate solutions, and it seems to work on my machine. But on a couple other machines, it's not working. I've tracked down the issue, and it looks like one of the solutions (or one of the projects in that solution) is being built in "Release" when everything else is being built in "Debug". This causes some of the references to be missing - so the solution being built in "Release" fails to build. I want all solutions to be built in "Debug". Here is our current TFSBuild.proj file - what are we doing wrong?

    <?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="DesktopBuild" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- TO EDIT BUILD TYPE DEFINITION

  To edit the build type, you will need to edit this file which was generated
  by the Create New Build Type wizard.  This file is under source control and
  needs to be checked out before making any changes.

  The file is available at -
      $/{TeamProjectName}/TeamBuildTypes/{BuildTypeName}
  where you will need to replace TeamProjectName and BuildTypeName with your
  Team Project and Build Type name that you created

  Checkout the file
    1. Open Source Control Explorer by selecting View -> Other Windows -> Source Control Explorer
    2. Ensure that your current workspace has a mapping for the $/{TeamProjectName}/TeamBuildTypes folder and 
       that you have done a "Get Latest Version" on that folder
    3. Browse through the folders to {TeamProjectName}->TeamBuildTypes->{BuildTypeName} folder
    4. From the list of files available in this folder, right click on TfsBuild.Proj. Select 'Check Out For Edit...'


  Make the required changes to the file and save

  Checkin the file
    1. Right click on the TfsBuild.Proj file selected in Step 3 above and select 'Checkin Pending Changes'
    2. Use the pending checkin dialog to save your changes to the source control

  Once the file is checked in with the modifications, all future builds using
  this build type will use the modified settings
  -->
  <!-- Do not edit this -->
  <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v8.0\TeamBuild\Microsoft.TeamFoundation.Build.targets" />
  <ProjectExtensions>
    <!--  DESCRIPTION
     The description is associated with a build type. Edit the value for making changes.
    -->
    <Description>
    </Description>
    <!--  BUILD MACHINE
     Name of the machine which will be used to build the solutions selected.
    -->
    <BuildMachine>xxxx-test</BuildMachine>
  </ProjectExtensions>
  <PropertyGroup>
    <!--  TEAM PROJECT
     The team project which will be built using this build type.
    -->
    <TeamProject>CFAST</TeamProject>
    <!--  BUILD DIRECTORY
     The directory on the build machine that will be used to build the
     selected solutions. The directory must be a local path on the build
     machine (e.g. c:\build).
    -->
    <BuildDirectoryPath>C:\Build\brtcn</BuildDirectoryPath>
    <SolutionRoot Condition=" '$(IsDesktopBuild)'!='true' ">$(BuildDirectoryPath)\w</SolutionRoot>
    <!--  DROP LOCATION
      The location to drop (copy) the built binaries and the log files after
     the build is complete. This location has to be a valid UNC path of the
     form \\Server\Share. The build machine service account and application
     tier account need to have read write permission on this share.
    -->
    <DropLocation>\\my-files\xx-files\Build Drop\xx</DropLocation>
    <!--  TESTING
     Set this flag to enable/disable running tests as a post build step.
    -->
    <RunTest>false</RunTest>
    <!--  WorkItemFieldValues
      Add/edit key value pairs to set values for fields in the work item created
      during the build process. Please make sure the field names are valid 
      for the work item type being used.
    -->
    <WorkItemFieldValues>Symptom=build break;Steps To Reproduce=Start the build using Team Build</WorkItemFieldValues>
    <!--  CODE ANALYSIS
       To change CodeAnalysis behavior edit this value. Valid values for this
       can be Default,Always or Never.

     Default - To perform code analysis as per the individual project settings
     Always  - To always perform code analysis irrespective of project settings
     Never   - To never perform code analysis irrespective of project settings
     -->
    <RunCodeAnalysis>Never</RunCodeAnalysis>
    <!--  UPDATE ASSOCIATED WORK ITEMS
     Set this flag to enable/disable updating associated workitems on a successful build
    -->
    <UpdateAssociatedWorkItems>true</UpdateAssociatedWorkItems>
    <!-- Title for the work item created on build failure -->
    <WorkItemTitle>Build failure in build:</WorkItemTitle>
    <!-- Description for the work item created on build failure -->
    <DescriptionText>This work item was created by Team Build on a build failure.</DescriptionText>
    <!-- Text pointing to log file location on build failure -->
    <BuildlogText>The build log file is at:</BuildlogText>
    <!-- Text pointing to error/warnings file location on build failure -->
    <ErrorWarningLogText>The errors/warnings log file is at:</ErrorWarningLogText>
  </PropertyGroup>
  <ItemGroup>
    <!--  SOLUTIONS
     The path of the solutions to build. To add/delete solutions, edit this
     value. For example, to add a solution MySolution.sln, add following line -
         <SolutionToBuild Include="C:\Project Files\path\MySolution.sln" />

     To change the order in which the solutions are build, modify the order in
     which the solutions appear below.
    -->
    <SolutionToBuild Include="..\..\xx1\xxx\Solution\xxx Solution.sln" />
    <SolutionToBuild Include="..\..\xx2\Build Solutions\xx xx xx Build\Data Port Common Build.sln" />
    <SolutionToBuild Include="..\..\xx3\Build Solutions\xxx Build\xxx Build.sln" />

  </ItemGroup>
  <ItemGroup>
    <!--  CONFIGURATIONS
     The list of configurations to build. To add/delete configurations, edit
     this value. For example, to add a new configuration, add following lines -
         <ConfigurationToBuild Include="Debug|x86">
             <FlavorToBuild>Debug</FlavorToBuild>
             <PlatformToBuild>x86</PlatformToBuild>
         </ConfigurationToBuild>

     The Include attribute value should be unique for each ConfigurationToBuild node.
    -->
    <ConfigurationToBuild Include="Debug|Any CPU">
      <FlavorToBuild>Debug</FlavorToBuild>
      <PlatformToBuild>Any CPU</PlatformToBuild>
    </ConfigurationToBuild>
  </ItemGroup>
  <ItemGroup>
    <!--  TEST ARGUMENTS
     If the RunTest is set to true then the following test arguments will be
     used to run tests.

     To add/delete new testlist or to choose a metadata file (.vsmdi) file, edit this value.
     For e.g. to run BVT1 and BVT2 type tests mentioned in the Helloworld.vsmdi file, add the following -

     <MetaDataFile Include="C:\Project Files\HelloWorld\HelloWorld.vsmdi">
         <TestList>BVT1;BVT2</TestList>
     </MetaDataFile>

     Where BVT1 and BVT2 are valid test types defined in the HelloWorld.vsmdi file.
     MetaDataFile - Full path to test metadata file.
     TestList - The test list in the selected metadata file to run.

     Please note that you need to specify the vsmdi file relative to $(SolutionRoot)
    -->
    <MetaDataFile Include=" ">
      <TestList> </TestList>
    </MetaDataFile>
  </ItemGroup>
  <ItemGroup>
    <!--  ADDITIONAL REFERENCE PATH
     The list of additional reference paths to use while resolving references.
     For example,
         <AdditionalReferencePath Include="C:\MyFolder\" />
         <AdditionalReferencePath Include="C:\MyFolder2\" />
    -->
    <AdditionalReferencePath Include="..\..\xxxx\Reference Files" />
    <AdditionalReferencePath Include="..\..\xxxx\Build Support\SomeCoolLibrary" />
    <AdditionalReferencePath Include="..\..\xxxx1.1\Reference Files " />
  </ItemGroup>
</Project>
From stackoverflow
  • Make sure that all your build scripts have this section:

    <ConfigurationToBuild Include="Debug|Any CPU">
      <FlavorToBuild>Debug</FlavorToBuild>
      <PlatformToBuild>Any CPU</PlatformToBuild>
    </ConfigurationToBuild>
    

    and that it is set to "Debug".

    leftend : Does that section need to be in the same "ItemGroup" as the "SolutionToBuild" statements?
  • Had a similar issue. Answer here.

    Hope this helps.

  • Here is a sample of our build file that forces the build to be made in release or debug mode:

    <MSBuild Projects="$(RootPath)\Source\Website.csproj"
             Targets="ResolveReferences;_CopyWebApplication"
             Properties="Configuration=$(BuildConfiguration);WebProjectOutputDir=$(DeploymentFolder);OutDir=$(DeploymentFolder)\bin\" />
    

    Configuration=$(BuildConfiguration) is the key in the above sample. Replace $(BuildConfiguration) with your build configuration or provide it from the command line (which is what we do). This will force the build in the mode you want. Hope this helps.
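    For reference, supplying the property from the command line (the project file name here is illustrative) looks something like:

```shell
# Any <MSBuild> task or condition that reads $(BuildConfiguration)
# will now see "Debug".
msbuild TFSBuild.proj /p:BuildConfiguration=Debug
```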