Friday, April 8, 2011

Add new templates in Xcode

How do I add new templates in Xcode for iPhone application development?

From stackoverflow
  • For some advice, I suggest taking a look at the answers to a related question: Change templates in Xcode.

    In particular, if you want your modifications not to be lost whenever you upgrade Xcode, put your templates in ~/Library/Developer/Shared/Xcode, not under /Developer.

  • From my answer to this question:

    XCode uses template files for file and project templates and does variable expansion in both at creation time.

    XCode 3.0-3.1 templates can be found in [Dev Installation]/Library/Xcode/, likely /Developer/Library/XCode. If you want to modify these templates or add your own, use the following directories to save your new/modified templates so that they are not wiped out by future Developer Tool upgrades:

    • File templates: ~/Library/Developer/Shared/Xcode/File Templates/
    • Target templates: ~/Library/Developer/Shared/Xcode/Target Templates/
    • Project templates: ~/Library/Developer/Shared/Xcode/Project Templates/

    I think that you can also use the /Library/Developer/Shared/Xcode/[File|Target|Project] Templates/ directory for templates shared by all users.

    A good tutorial on writing file templates is here [MacResearch.org].

  • If it helps, I've created a tutorial that demonstrates how you can change the existing project templates. The method I use prevents the templates from being overwritten when a new version of Xcode is installed. I've tested this method using Xcode versions 3.2.2 and 3.2.3 (beta).

    http://www.sodeso.nl/?p=895

  • I wrote an article on how to create a new Xcode template from an existing project here.

    It covers:

    • specifying that your project files are relative to the project path
    • using the command line to search/replace and fiddle with file permissions
    • excluding info.plist from the target membership
    • giving your template an icon

    It doesn't cover where to put the template. That information is covered in other answers.

Type parameters versus member types in Scala

I'd like to know how member types work in Scala, and how I should associate types.

One approach is to make the associated type a type parameter. The advantages of this approach are that I can prescribe the variance of the type, and that I can be sure a subtype doesn't change the type. The disadvantage is that I cannot infer the type parameter from the type in a function.

The second approach is to make the associated type a member of the second type. This has the problem that I can't prescribe bounds on the subtypes' associated types and therefore can't use the type in function parameters (given x : X, X#T might not be in any relation with x.T).

A concrete example would be:

I have a trait for DFAs (could be without the type parameter)

trait DFA[S] { /* S is the type of the symbols in the alphabet */
  trait State { def next(x : S); }
  /* final type Sigma = S */
}

and I want to create a function for running this DFA over an input sequence, and I want

  • the function must take anything <% Seq[alphabet-type-of-the-dfa] as input sequence type
  • the function caller needn't specify the type parameters, all must be inferred
  • I'd like the function to be called with the concrete DFA type (but if there is a solution where the function would not have a type parameter for the DFA, it's OK)
  • the alphabet types must be unconstrained (ie. there must be a DFA for Char as well as for a yet unknown user-defined class)
  • the DFAs with different alphabet types are not subtypes

I tried this:

def runDFA[S, D <: DFA[S], SQ <% Seq[S]](d : D)(seq : SQ) = ....

this works, except the type S is not inferred here, so I have to write the whole type parameter list on each call site.

def runDFA[D <: DFA[S] forSome { type S }, SQ <% Seq[D#Sigma]]( ... same as above

this didn't work (invalid circular reference to type D??? (what is it?))

I also deleted the type parameter, created an abstract type Sigma and tried binding that type in the concrete classes. runDFA would look like

def runDFA[D <: DFA, SQ <% Seq[D#Sigma]]( ... same as above

but this inevitably runs into problems like "type mismatch: expected dfa.Sigma, got D#Sigma"

Any ideas? Pointers?

Edit:

As the answers indicate there is no simple way of doing this, could somebody elaborate on why it is impossible and what would have to change for it to work?

The reason I want runDFA to be a free function (not a method) is that I want other similar functions, like automaton minimization, regular language operations, NFA-to-DFA conversions, language factorization etc., and having all of this inside one class is just against almost any principle of OO design.

From stackoverflow
  • Scala's type inference sometimes leaves much to be desired.

    Is there any reason why you can't have the method inside your DFA trait?

    def run[SQ <% Seq[S]](seq: SQ)
    

    If you don't need the D param later, you can also try defining your method without it:

    def runDFA[S, SQ <% Seq[S]](d: DFA[S])(seq: SQ) = ...
    
  • First off, you don't need the parameterisation SQ <% Seq[S]. Write the method parameter as Seq[S]. If SQ <% Seq[S] then any instance of it is implicitly convertible to Seq[S] (that's what <% means), so when passed as Seq[S] the compiler will automatically insert the conversion.

    Additionally, what Jorge said about type parameters on D and making it a method on DFA holds. Because of the way inner classes work in Scala I would strongly advise putting runDFA on DFA. Until the path dependent typing stuff works, dealing with inner classes of some external class can be a bit of a pain.

    So now you have

    trait DFA[S]{
      ...
    
      def runDFA(seq : Seq[S]) = ...
    }
    

    And runDFA is all of a sudden rather easy to infer type parameters for: It doesn't have any.
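
    A minimal sketch of what that can look like, assuming (beyond what the question spells out) that State.next returns the successor state and that the DFA exposes a start state and an accepting test:

    trait DFA[S] {
      trait State { def next(x: S): State }
      def start: State
      def accepting(s: State): Boolean

      // S is fixed by the receiver, so callers never write type parameters
      def run(seq: Seq[S]): Boolean =
        accepting(seq.foldLeft(start)((state, sym) => state.next(sym)))
    }

    // Hypothetical usage with a Char alphabet: accepts strings with an even number of 'a's
    object EvenAs extends DFA[Char] {
      object Even extends State { def next(x: Char): State = if (x == 'a') Odd else Even }
      object Odd  extends State { def next(x: Char): State = if (x == 'a') Even else Odd }
      def start: State = Even
      def accepting(s: State) = s == Even
    }

    // EvenAs.run("abab".toSeq)  // argument inferred as Seq[Char], returns true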

Equivalent of double-clickable .sh and .bat on Mac?

I am distributing a Java program where I want a double-clickable file to run

java -cp MyProgram.jar;MyLib.jar my.program.Main

On Windows I simply distribute a .bat file, for *nix an executable .sh file. Problem is, double-clicking the .sh file just opens it up in a text editor on Mac. What should I do for Mac?

From stackoverflow
  • You can use a .sh (shell script); after all, Mac OS X is Unix!

  • On the Mac, there is a specific extension for shell scripts that should run when double-clicked: .command.

    Epaga : works like a charm!
  • For Java applications on Mac, you really should use Apple's Jar Bundler (in the Developer Tools/Applications/Utilities folder; really a symlink to /usr/share/java/Tools/Jar Bundler). It lets you make a proper OS X double-clickable app, including setting preferences for e.g. using the Mac toolbar, JVM version, graphics system, OS X app metadata and classpath/resources.

  • The answer about using the Jar Bundler tool is correct, but if you want to use a .sh file, make sure the Unix permissions are set properly to something like 755 with chmod, and make sure the first line contains the path to a shell installed by default on Mac OS X. Also note that even with the +x bit set, it may still ask the user whether they want to open it or run it.
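
    To make the answers concrete, a hedged sketch of what the double-clickable file could contain (file name is made up; note that the classpath separator on Mac OS X is ':' rather than the ';' used in the Windows .bat):

    #!/bin/sh
    # MyProgram.command - run the jar from the folder the script lives in
    cd "$(dirname "$0")"
    java -cp MyProgram.jar:MyLib.jar my.program.Main

    and mark it executable once before distributing it:

    chmod 755 MyProgram.command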

Google Translation API

Has anyone used the Google Translation API? What is the maximum length of text it accepts?

From stackoverflow
  • 500 characters

    source

    Ken Browning : it could be. i just googled for you. i have never used the service.
    duffymo : Wrong. See the comment in the link you cited right under the one that quoted 500 chars.
    Ken Browning : I don't argue that I might be wrong. However, that comment is talking about a wrapper which removes character limitations. Apples and oranges imho.
  • I've used it to translate Japanese to English.

    I don't believe the 500 char limit is true if you use http://code.google.com/p/jquery-translate/, but one thing that is true is you're restricted as to the number of requests you can make within a certain period of time. They also try to detect whether or not you're sending a lot of requests within a short period, almost like a mini "denial of service" attack.

    So when I did this I wrote a client with a random length sleep between requests. I also ran it on a grid so all the requests didn't come from a single IP address.

    I had to translate ~2000 Java messages from a resource bundle from Japanese to English. It worked out pretty nicely, as long as the text was single words. Longer phrases with context came out awkwardly.
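
    A hedged sketch of that throttled client (translate() is a hypothetical stand-in for whatever call you actually make against the service):

    void translateAll(java.util.List<String> messages) throws InterruptedException {
        java.util.Random rnd = new java.util.Random();
        for (String message : messages) {
            System.out.println(message + " -> " + translate(message, "ja", "en"));
            Thread.sleep(500 + rnd.nextInt(2000)); // random 0.5 - 2.5 s pause between requests
        }
    }

    // hypothetical stand-in for the actual translation request
    String translate(String text, String from, String to) { return text; }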

  • The limit was 500... now it is 5000 chars.

Problem with php function to catch error on saving image

I have a method to save an image, which is meant to deal gracefully with an error, setting $imageSrc to a particular image in the event of failure. My method works fine if the image is present, but none of the error conditions seems to be handled correctly.

$imageSrc = save_pic($PIC_URL, $pk);

function save_pic($pic_url, $pk) {
    $imageDir = './';
    if (!strlen($pic_url)) {
        return "removed.jpg";
    }
    if (!is_dir($imageDir) || !is_writable($imageDir)) {
        return "removed.jpg";
    }
    $image = file_get_contents($pic_url);
    if (empty($image)) {
        return "removed.jpg";
    }
    $r = file_put_contents($imageDir.$pk.".jpg", $image);
    if ($r) {
        return "./$pk.jpg";
    } else {
        return "removed.jpg";
    }
}

If the image does not exist, I get :

Warning: getimagesize(http://127.0.0.1/555.jpg) [function.getimagesize]: failed to open stream: HTTP request failed! HTTP/1.1 404 Not Found
in C:\Program Files\EasyPHP 2.0b1\www\get_auction.php on line 144

Array (
 [type] => 2 [message] => getimagesize(http://127.0.0.1/555.jpg)
 function.getimagesize]: failed to open stream: HTTP request failed! HTTP/1.1 404 Not Found
 [file] => C:\Program Files\EasyPHP 2.0b1\www\get_auction.php
 [line] => 144
) 1

removed.jpg is never returned in any event.

edit: added imageResize code:

function imageResize($imageURL, $maxWidth, $maxHeight)
{
    global $outputWidth, $outputHeight, $maxWidth, $maxHeight;

    $size = getimagesize($imageURL);
    if ($size) {
        $imageWidth  = $size[0];
        $imageHeight = $size[1];
        $wRatio = $imageWidth / $maxWidth;
        $hRatio = $imageHeight / $maxHeight;
        $maxRatio = max($wRatio, $hRatio);

        if ($maxRatio > 1) {
            $outputWidth  = $imageWidth / $maxRatio;
            $outputHeight = $imageHeight / $maxRatio;
        } else {
            $outputWidth  = $imageWidth;
            $outputHeight = $imageHeight;
        }
    } else {
        die(print_r(error_get_last()));
    }
}
From stackoverflow
  • Are you sure you're writing to the root folder? Because I can't see any problem with your code. file_get_contents and file_put_contents don't seem to be failing, so your image is being written somewhere.

    Joshxtothe4 : but it doesn't deal with the file not existing in order to get
    Can Berk Güder : so file_get_contents *should* return FALSE but doesn't, is that correct?
    Joshxtothe4 : Exactly. The problem is not with the folder being writable, but with the image not existing at the URL
  •  $image = file_get_contents("http://example.com/test.png");
     list($ver, $retcode, $message) = explode(' ', $http_response_header[0], 3);
     if ($retcode != 200) {
       return "removed.jpg";
     }
    

    $retcode will contain HTTP response code.

    Please post this $retcode here and what your strlen($image) returns, it might help to resolve your problem.

    Joshxtothe4 : the error code is 404, it states it in the error message
    Quassnoi : Yes, and he can parse it in the code.
    Joshxtothe4 : I put this in my code as you suggest, and only the file_get_contents error is output, nothing after that.
    Quassnoi : Does it ever get after file_get_contents? Put an echo right after file_get_contents and see if it gets there.
    Dominic Rodger : ah... the wonder that is debugging PHP
  • Try this:

    <?php
    $imageSrc = save_pic($PIC_URL, $pk);
    
    function save_pic($pic_url, $pk) 
    {
        $imageDir = './';
    
        if (!strlen($pic_url))
        {
            return 'removed.jpg';
        }
    
        if(!is_dir($imageDir) || !is_writable($imageDir)) 
        {
            return 'removed.jpg';
        }
    
        if(!file_exists($pic_url))
        {
            return 'removed.jpg';
        }
    
        if (file_put_contents($imageDir . $pk . '.jpg', file_get_contents($pic_url))) 
        {
                return $imageDir . $pk . '.jpg'; 
        } 
        else 
        {
                return 'removed.jpg';
        }
    }
    
    Joshxtothe4 : Hi Bart, this had the same exact behavior.
    Bart S. : Can you post the code from C:\Program Files\EasyPHP 2.0b1\www\get_auction.php around line 144? Because that is what's causing this error: C:\Program Files\EasyPHP 2.0b1\www\get_auction.php on line 144
    Joshxtothe4 : line 144 is just $imageSrc = save_pic($PIC_URL, $pk);
  • Debug?

    Get an IDE that includes a debugger, like Eclipse or NetBeans, read the manual for functions to see how they respond, or do good old inline debugging to echo values at runtime.

  • It seems to me you're not posting your complete code?

    The warning message says getimagesize(), yet nowhere is getimagesize() used in your example. To receive better help I would include the whole method or an updated error message of your current efforts. Please also include the PHP version you're using.

    file_get_contents() will return false in case of errors and does so on 404 HTTP errors, as demonstrated:

    mfischer@testing01:~$ php -r 'var_dump(file_get_contents("http://stackoverflow.com/i_do_not_exist.jpg"));'
    
    Warning: file_get_contents(http://stackoverflow.com/i_do_not_exist.jpg): failed to open stream: HTTP request failed! HTTP/1.1 404 Not Found
     in Command line code on line 1
    
    Call Stack:
        0.0002      51884   1. {main}() Command line code:0
        0.0002      51952   2. file_get_contents() Command line code:1
    
    bool(false)
    mfischer@testing01:~$ php -v
    PHP 5.2.6-5 with Suhosin-Patch 0.9.6.2 (cli) (built: Oct  5 2008 13:07:13)
    

    Trying out your code it works perfectly fine for me here:

    $ cat test.php
    <?php
    $PIC_URL="http://stackoverflow.com/i_dont_exist.jpg";
    $pk = "test";
    $imageSrc = save_pic($PIC_URL, $pk);
    var_dump($imageSrc);
    
    function save_pic($pic_url, $pk) {
        $imageDir = './';
        if (!strlen($pic_url))
                return "removed.jpg";
        if (!is_dir($imageDir) || !is_writable($imageDir)) {
            return "removed.jpg";
        }
        $image = file_get_contents($pic_url);
        if (empty($image)) {
            return "removed.jpg";
        }
        $r = file_put_contents($imageDir.$pk.".jpg", $image);
        if ($r) {
                return "./$pk.jpg";
        } else {
                return "removed.jpg";
        }
    }
    $ php test.php
    
    Warning: file_get_contents(http://stackoverflow.com/i_dont_exist.jpg): failed to open stream: HTTP request failed! HTTP/1.1 404 Not Found
     in /home/mfischer/tmp/532480/test.php on line 14
    
    Call Stack:
        0.0005      59824   1. {main}() /home/mfischer/tmp/532480/test.php:0
        0.0005      60176   2. save_pic() /home/mfischer/tmp/532480/test.php:4
        0.0006      60444   3. file_get_contents() /home/mfischer/tmp/532480/test.php:14
    
    string(11) "removed.jpg"
    

    PHP functions have the bad habit of just spilling out warning messages right in your code; if you don't like this you can silence them with the '@' operator as suggested before, or you can alternatively use an HTTP client library as provided by PEAR_HTTP or Zend_HTTP_Client to have better control over error handling. Rolling your own thing with sockets, fsockopen, etc. would also be possible.

    But back to the point: if it's still not working, I think there's some information missing.

    Joshxtothe4 : I have updated my question to show the ImageResize function. The problem seems to be with the 404 error however.
  • Should you not be testing for file_get_contents errors? I mean with something like

    if( false == ($image = file_get_contents($pic_url))){
            return "removed.jpg";
        }
    
    • Send HTTP HEAD to given URL
    • Check returncode == 200 (file is there) else it's removed
    • Check that Content-Type header is image/* (for example image/png) else it's removed
    • If server sent Content-Length header, read that to variable
    • file_get_contents(URL)
    • If server sent that Content-Length see if size matches else it's removed
    • Save image
    • Try getimagesize(), if it gives errors remove it via unlink() and it's removed else keep it
    • Success!
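
    A hedged sketch of that checklist in PHP (function name and details are made up; note get_headers() actually issues a GET by default, which is close enough for the pre-check):

    function save_remote_pic($pic_url, $pk, $imageDir = './')
    {
        // 1-3: check the status code and content type before downloading
        $headers = @get_headers($pic_url, 1);
        if ($headers === false || strpos($headers[0], '200') === false) {
            return 'removed.jpg';
        }
        $type = isset($headers['Content-Type']) ? $headers['Content-Type'] : '';
        if (!is_string($type) || strpos($type, 'image/') !== 0) {
            return 'removed.jpg';
        }

        // 4-6: fetch the body; file_get_contents() returns FALSE on failure
        $image = @file_get_contents($pic_url);
        if ($image === false) {
            return 'removed.jpg';
        }
        if (isset($headers['Content-Length'])
            && is_string($headers['Content-Length'])
            && strlen($image) != (int) $headers['Content-Length']) {
            return 'removed.jpg';
        }

        // 7-8: save it, then verify it really is an image
        $path = $imageDir . $pk . '.jpg';
        if (file_put_contents($path, $image) === false) {
            return 'removed.jpg';
        }
        if (@getimagesize($path) === false) {
            unlink($path);
            return 'removed.jpg';
        }
        return $path;
    }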
  • You are having path issues. You should store the full path to the pic in a define.

    Make sure this file is sourced from the save_pic function.

    DEFINE("RemovedPicUrl", "http://" . $_SERVER['SERVER_NAME'] . "/path/to/removed/image/removed.jpg")
    

    then change all occurrences of

    "Removed.jpg"
    

    to

    RemovedPicUrl
    

    And I'll bet you a dollar that fixes your issue.

  • The problem was not a code issue, but was caused by file corruption. This was determined while testing on other machines.

Does a form reset button fire a select elements onChange event?

I have a form with some select elements that have onChange events attached to them. I would like the event to fire even when someone clicks the form reset button.

My question is: does resetting a form fire a select elements onChange event?

Here is a simple example in jQuery

<script type="text/javascript">
    $('.myselect').change(function() {
        // do something on change
    });
</script>

<form action="/" method="post">
    <select class="myselect" name="select1">
        <option value="1">First</option>
        <option value="2">Second</option>
    </select>
    <select class="myselect" name="select2">
        <option value="1">First</option>
        <option value="2">Second</option>
    </select>
    <!-- When this is clicked I would like it fire the change event -->
    <input type="reset" value="Reset" /> 
    <input type="submit" value="Save" />
</form>

Thanks!

From stackoverflow
  • There is an onReset attribute for forms, so you could do something like:

    <script type="text/javascript">
        var changeFunc = function() {
            // do something on change.
        };
        $('.myselect').change(changeFunc);
    </script>
    
    <form onReset="changeFunc()" action="/" method="post">
        <select class="myselect" name="select1">
            <option value="1">First</option>
            <option value="2">Second</option>
        </select>
        <select class="myselect" name="select2">
            <option value="1">First</option>
            <option value="2">Second</option>
        </select>
        <!-- When this is clicked I would like it fire the change event -->
        <input type="reset" value="Reset" /> 
        <input type="submit" value="Save" />
    </form>
    

    This method is called before the reset happens, though, so it can be tricky. I guess that's so you can show an "Are you sure?" box. Tossing a setTimeout into the reset handler seems to do the trick.
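
    For example, a hedged sketch of that setTimeout trick (handler bodies are placeholders):

    <script type="text/javascript">
        $(function() {
            $('.myselect').change(function() {
                // do something on change
            });

            $('form').bind('reset', function() {
                // wait until the reset has actually restored the defaults,
                // then re-fire every select's change handler
                setTimeout(function() {
                    $('.myselect').change();
                }, 0);
            });
        });
    </script>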

How do I set the format of toString methods with ToStringBuilder in commons-lang?

How do I configure the format used for toString?

From stackoverflow
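
    A hedged sketch of the usual approach, assuming the Commons Lang 2.x ToStringStyle API (class and field names are made up): you pass a ToStringStyle to the builder, or set a default, and that controls the output format.

    import org.apache.commons.lang.builder.ToStringBuilder;
    import org.apache.commons.lang.builder.ToStringStyle;

    public class Person {
        private String name = "Ada";
        private int age = 36;

        @Override
        public String toString() {
            // SHORT_PREFIX_STYLE, MULTI_LINE_STYLE, SIMPLE_STYLE, etc. are built in;
            // subclass ToStringStyle for a fully custom format
            return new ToStringBuilder(this, ToStringStyle.SHORT_PREFIX_STYLE)
                    .append("name", name)
                    .append("age", age)
                    .toString();
        }
    }

    // Or set it once for every builder in the application:
    // ToStringBuilder.setDefaultStyle(ToStringStyle.MULTI_LINE_STYLE);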

How often do you reevaluate and upgrade your development environment and dev. tools?

I was curious how often other software developers reevaluated their development environments and tools. I used to work at a large corporation with rigid toolsets that everyone hated, but could do nothing about. So nobody ever really updated their development environments because we couldn't in that environment.

Now that I'm in my own start-up I find I can spend endless time evaluating new tools and development environments, but that I really shouldn't and can't afford to. I've committed to spending 1 day a month looking at new development tools and trying them out to see if it is worth switching.

How often do you try out new IDEs, editors, bug tracking tools, debuggers? Or update to newer versions of the ones you already use?

From stackoverflow
  • It's an ongoing process, but I don't make major changes more often than every two years or so. A major change involves too much time, and the tradeoff isn't generally worth it. Major changes might be defined as changing the whole target or compiler architecture and toolchain for an existing project.

    Note that major changes can occur between projects - a new project can settle on a completely different architecture and toolchain with no significant cost. But care should be taken not to go too bleeding edge here. An evaluation process is needed to prevent selection of a setup that will not support the project later as the project grows in complexity.

    But for minor changes I simply upgrade my tools and environment as I find opportunity and reason to do so.

  • For me, upgrades are event-driven, not timer-driven. I keep my ear to the ground for new tools (libraries, IDEs, CASE tools, etc) and evaluate them as they show up on my radar.

    Working with Microsoft technologies, I move to the newest version if there's no compelling reason holding me back. With OSS, I use what I know unless there's something compelling pushing me forward.

  • I only update if I'm really missing out on a certain piece of functionality, or if I realize that NOT using one tool instead of another leads to tasks taking longer or being less efficient.

    Kyle Walsh : I'd say it's a mix of tsilb's and Jekke's approaches. I pay attention to the new releases, but as I said, only upgrade if I really need the new stuff (or find, after experimentation, that the new features are amazing AND it upgrading won't harm my other expectations of the product).
  • At work, we upgrade a tool when our version reaches end of support lifetime. We upgrade to the next-older version.

    At home, I upgrade as soon as I can find a copy of the new thing free (i.e. some deals where attending 3 webcasts will send you a copy of vs2008 std edition, user groups, etc.).

  • IDEs. I tend to stick with one I know will grow and support my language. In my dev environment it's vim. It is actively developed, and has many, many scripts (kinda like plugins) as well as documentation for DIY. Also, learning an IDE takes time, and becoming good at it and using it efficiently takes even more time.

    Revision control. I try to stay just below the bleeding edge. The benefits of new features are important. For example, Subversion 1.4 only supported rudimentary merging; Subversion 1.5 overhauled the merging system and added new features.

    Task and project management. I tend to do that only every couple of years, and only if there is a good perceived benefit. Otherwise I will continue to upgrade my current system to the current stable release every couple of months.

    Libraries. They are a toss-up. Since most everything I do does not end up in a shipped product, I feel more free to upgrade often, but we tend to shy away from upgrading when backwards compatibility is broken.

    Hope my $0.02 was useful.

  • IDEs - This can be tricky but I have gone through a few different progressions over the years. Sometimes being on a project or a specific feature can trigger an upgrade. For example, someone implemented a feature using LINQ so what was an ASP.Net 2.0 project became a 3.5 project overnight. Other times, it is just what is currently in use. A point here is that a change can impact a whole team so it isn't a change to be made lightly.

    Bug tracking tools - This is also in that land of centralized stuff that has to be carefully managed. Since this is a QA tool, I'd hope they have their own policies of how often to look for updates and when to install them as sometimes new features can be cool to get. The dev team equivalent would be when to update the wiki.

    Version control - These are individually managed since most of us use Tortoise SVN so we each have a local client copy. So, the updates are done when someone wants to do it. I like to stay up to date as much as possible, personally.

    OS - While part of this can be controlled on a department basis, there are enough different pieces to update that sometimes I'll run an update on my own. I'm not sure when we'll move to Windows 7 as I know we aren't going to Vista and I'd think at some point we'd get off XP as I've been on XP now for about 5 years as before that I was on Windows 2000 Professional for a few years and NT 4.0 before that.

    PC - There is a policy that every 3 years we get new machines I believe. When I started where I am now, I was on a P4 box, so the upgrade to a dual-core box was very nice as well as a good RAM boost from 2 GB to 4 GB.

CMD file copy from Java

I am looking to open up a command prompt and pass in a copy command, some switches, and the source file plus destination. I've tried the code below but nothing appears to be happening. What am I not seeing? What could I be doing wrong?

String line;

line = "cmd COPY /Y C:\srcfolder\112.bin C:\destfolder";

Process p = Runtime.getRuntime().exec(line);

p.waitFor();
From stackoverflow
  • Is there a reason you aren't simply copying the file in Java rather than creating a system process?

    Copying the files using Java rather than an exec call would keep your code portable.

    Cameron Pope : If this were some *nix flavor, I'd agree, but in practice it is really hard to address a lot of Windows network resources in Java and it's more robust just to execute copies and deletes in the shell.
    Software Monkey : The linked example is a terrible reference for a binary file copy; treats the file as characters (what happens if there is an odd number of bytes) and is horribly inefficient.
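
    For reference, a hedged sketch of a binary-safe pure-Java copy (NIO FileChannel, JDK 1.4+; the helper name is made up), which avoids the character-conversion problem mentioned in the comment above:

    import java.io.*;
    import java.nio.channels.FileChannel;

    public static void copyFile(File src, File dest) throws IOException {
        FileInputStream in = new FileInputStream(src);
        FileOutputStream out = new FileOutputStream(dest);
        try {
            FileChannel inChannel = in.getChannel();
            inChannel.transferTo(0, inChannel.size(), out.getChannel());
        } finally {
            in.close();
            out.close();
        }
    }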
  • If you really have to use an external command, then you probably want to execute (notice the /C):

    CMD /C COPY /Y C:\srcfolder\112.bin C:\destfolder
    

    I recommend you use the array version of exec to avoid handling of quoting (should any files or directories contain spaces - or double-quotes - in them):

    String[] args = { "CMD", "/C", "COPY", "/Y", src_file, dest_folder };
    Process p = Runtime.getRuntime().exec(args);
    p.waitFor();
    

    Remember that this is not portable (will not work on Unix), so unless you really really need to use COPY then you should use the method linked to by bstpierre.

    Cheers, V.

    OscarRyz : What was the difference between /C and /K again?
    John T : @Oscar - /C will close the window after execution, /K will keep it open.
  • Check this out, this may help you:

    http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=006Tv9

  • I second bstpierre's comment.

    In reference to your particular problem, I believe that the cmd shell is not exiting after you create it. (edit: and Vlad has pointed out how to correct that)

    As an aside, for other commands in the future, don't forget to escape your backslashes: line="cmd copy /y c:\\srcfolder\\112.bin c:\\destfolder"

    Ben S : You can always vote up my answer :D
    Peter Richards : Actually, with my mighty 1 reputation, I cannot.
    Ben S : Whoops, good point. Well in that case, welcome to SO!
    Bill K : Modified your answer because you forgot to double-quote your double-backslash example :) Apparently it's needed here as well as in your Java code (requiring an overall 4 backslashes for this example to represent a single backslash!). Anyway, +1 4u
    Peter Richards : Ha! Well that probably explains Zombie8's post lacking them then. Thanks.
  • try

    line = "cmd /C COPY /Y C:\srcfolder\112.bin C:\destfolder";
    Process p = Runtime.getRuntime().exec(line);
    p.waitFor();
    

    However, you'll run into problems if you have files and folders with spaces in them. I've found the most robust way to execute commands is to use ProcessBuilder, and pass in the command with all of the arguments as parameters.
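
    A hedged sketch of the ProcessBuilder variant (paths taken from the question; no quoting is needed because each argument is passed separately):

    ProcessBuilder pb = new ProcessBuilder(
            "cmd", "/C", "copy", "/Y", "C:\\srcfolder\\112.bin", "C:\\destfolder");
    pb.redirectErrorStream(true);   // merge COPY's error output into stdout
    Process p = pb.start();
    p.waitFor();                    // exit code 0 means the copy succeeded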

  • Use CMD /C to run the Windows copy command:

    CMD /C COPY /Y C:\srcfolder\112.bin C:\destfolder

    An alternative: Apache Commons IO provides a nice set of libraries to handle file transfers with pure Java. Specifically look at FileUtils.copyFileToDirectory(File srcFile, File destDir)

  • Ahh, looks like someone did mention it, but I'll clarify (especially because the one that did mention it forgot to quote their backslash in the post, making it look like a single!).

    So the solutions listed are better, but I'm fairly sure that the reason you are failing is that in Java you can never use backslashes as singles; they are the escape character, so you always need \\ inside a string. And for 2 backslashes in a row, I think you need 6 or 8 of them !!?!?? look it up.

    Fixed the guy who posted it before me and gave him a +1

CMS with strict Code Review for Add-Ons?

Some popular CMSs have a huge number of add-ons and try to fix every security problem as quickly as possible, without hiding. They end up with a lot of security announcements.

This seems to be the wrong way, because they distribute broken code and fix it after that. Not intentional, but that's the picture this is painting.

Are there any free CMS projects which have a strict system of code review for any given add-on? Contributing to such a project could become tiresome, but it would be worth it.

EDIT: I'm getting mixed messages on SO and other places. If you ask for a good CMS, the same few always come up at the top. And they have one thing in common: many modules. These CMSs define themselves by this humongous amount of add-ons, without which they wouldn't be half as interesting.

These add-ons are "advertised" and offered on the main project's site. A repository full of code from disastrous to awesome.

From stackoverflow
  • Avoid using add-ons which don't live up to your quality standards. Most mature CMSs are quite robust without add-ons.

    The purpose of add-ons is generally to let users add functionality without requiring the CMS publisher's input. It's the add-on developers' responsibility to have robust code. I would be surprised if a free CMS project allowed add-ons, but required a strict code review for them. If the system is free and open, add-on publishers can simply release the add-ons themselves.

    If you have certain requirements for robustness and security and you can't find a solution, you can always develop it yourself. That way you can ensure that the add-on code follows your standards.

  • This is a very tough arena, especially in the "Free" markets.

    I spend a lot of time with DotNetNuke, which is an open source, CMS type framework. There are frequent releases, some of which identify security issues. Core module projects (add-ons) go through a security review with all public releases, which helps a lot.

    The third-party market though is a bit scary when it comes to security.

  • No. Add ons for each CMS are separate from the main project. Some projects have "core" modules that are reviewed and audited with the main code line as they are considered part of the CMS, rather than an add-on, even though they are implemented as add-ons.

    If you use only the core and project approved add-ons then you'll be as safe as one can be depending on the reviews they give.

    There are no free projects that currently have an auditing, signing, or certification process for third party add-ons. The resources in a free project are often simply not available.

    Lastly, please keep in mind that even a 'strict code review' will not eliminate all bugs and security risks. Such a review prevents some bugs and risks from getting through, and raises the bar, but there is no silver bullet. Such a project as you describe will certainly release security fixes from time to time.

static vs extern "C"

(expert C/C++ question) What is the difference between a static member function and an extern "C" linkage function? For instance, when using "makecontext" in C++, I need to pass a function pointer. Google recommends using extern "C" linkage for it, because "makecontext" is C. But I found out that using static works as well. Am I just lucky or...

class X {
   public:
   static void proxy(int i) {}
};
makecontext(..., (void (*)(void)) X::proxy, ...);

vs

extern "C" void proxy(int i) {}
makecontext(..., (void (*)(void)) proxy, ...);

EDIT: I am sorry, but I'm still not convinced... Can you show a compiler or architecture where the static member version does not work (and it's not a bug in the compiler) ?

From stackoverflow
  • extern "C" disables the C++ compiler's name mangling (which is required for overloading).

    If you declare a function in A.cpp to be static, then it cannot be found by B.cpp (it's a leftover from C, and it has the same effect as putting a function inside an anonymous namespace).

    Helltone : This does not answer my question
  • Yes, you are just lucky :) extern "C" is the language linkage for the C language that every C++ compiler has to support, besides extern "C++", which is the default. Compilers may support other language linkages. GCC for example supports extern "Java", which allows interfacing with Java code (though that's quite cumbersome).

    extern "C" tells the compiler that your function is callable by C code. That can, but not must, include the appropriate calling convention and the appropriate C language name mangling (sometimes called "decoration") among other things depending on the implementation. If you have a static member function, the calling convention for it is the one of your C++ compiler. Often they are the same as for the C compiler of that platform - so i said you are just lucky. If you have a C API and you pass a function pointer, better always put one to a function declared with extern "C" like

    extern "C" void foo() { ... }
    

    Even though the function pointer type does not contain the linkage specification but rather looks like

    void(*)(void)
    

    The linkage is an integral part of the type - you just can't express it directly without a typedef:

    extern "C" typedef void(*extern_c_funptr_t)();
    

    The Comeau C++ compiler, in strict mode, will for example emit an error if you try to assign the address of the extern "C" function above to a (void(*)()), because that is a pointer to a function with C++ linkage.

    Franci Penov : To add to litb's answer, you should read about the calling conventions at Wikipedia - http://en.wikipedia.org/wiki/X86_calling_conventions. extern C implies cdecl calling convention; your compiler uses the same one for static member functions. Other compilers might as well choose any other.
    Helltone : @Comeau compiler, is it an error or warning that it emits ?
    Johannes Schaub - litb : Helltone, try it out http://www.comeaucomputing.com/tryitout/ it says: '"ComeauTest.c", line 4: error: a value of type "void (*)() C" cannot be used to initialize an entity of type "void (*)()"'
    Johannes Schaub - litb : however, it has a relaxed mode in which it accepts the program. strict mode will try to adhere to the standard as much as possible.
  • Note that extern "C" is the recommended way of achieving C/C++ interoperability. Here is the master talking about it. To add to eduffy's answer: note that static functions and variables in the global namespace are deprecated. Use an anonymous namespace at least.

    Back to extern C: if you don't use extern C you will have to know the exact mangled name and use it. That is much more of a pain.

  • Most of what extern "C" does is largely compiler dependent. Many platforms change the name mangling and calling convention based on the declaration, but none of that is specified by the standard. Really the only thing the standard requires is that the code in the block is callable from C functions. As for your specific question, the standard says:

    Two function types with different language linkages are distinct types even if they are otherwise identical.

    This means extern "C" void proxy(int i) {} and /*extern "C++"*/void proxy(int i) {} have different types, and as a result pointers to these functions would have different types as well. The compiler doesn't fail your code for the same reason it wouldn't fail a great piece of work like:

    int *foo = (int*)50;
    makecontext(..., (void (*)(void)) foo, ...);
    

    This code might work on some platform, but that doesn't mean it will work on another platform (even if the compiler was fully standard compliant). You are taking advantage of how your particular platform works, which might be ok if you aren't concerned about writing portable code.

    As for static member functions, they aren't required to have a this pointer so the compiler is free to treat them as a non member function. Again, the behavior here is platform specific.

    Helltone : Good answer, but you totally miss the point of my question. My question concerns the difference between extern "C" function and a *static member* function.
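
    Putting the answers together, a hedged sketch of the portable arrangement (the argument value and setup details are only illustrative; ctx is assumed to have been prepared with getcontext and a stack):

    #include <ucontext.h>

    class X {
    public:
        static void proxy(int i);   // the real logic stays a C++ static member
    };

    extern "C" void proxy_trampoline(void) {
        X::proxy(42);               // forward to the C++ code
    }

    void setup(ucontext_t &ctx) {
        // the function pointer now has C language linkage,
        // which is what the C API is declared to expect
        makecontext(&ctx, proxy_trampoline, 0);
    }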

How do you count related rows within a query?

I am trying to make a query that pulls out all Tickets for a particular company. Within that result there will be a column named [Repeat].

What I need the query to do is check whether there are any other rows with a matching Circuit_ID within the last 30 days of that ticket.

"SELECT [MAIN_TICKET_ID], [CompID], [ActMTTR], [ActOTR], [DtCr], [DtRFC],
                CASE WHEN [PRIORITY] = 1 THEN '1' 
                     WHEN [PRIORITY] = 2 THEN '2' 
                     WHEN [PRIORITY] = 3 THEN '3' END AS [PRIORITY],
                CASE WHEN ([PRIORITY] = '1' AND [ActMTTR] >= '4' AND ([ResCd7] = 'Equipment (XX)' OR [ResCd7] = 'Lec Facilities (LEC)')) 
                       OR ([PRIORITY] = '1' AND [ActOTR] >= '14' AND ([ResCd7] = 'Equipment (XX)' OR [ResCd7] = 'Lec Facilities (LEC)')) 
                       OR ([PRIORITY] = '2' AND [ActMTTR] >= '6' AND ([ResCd7] = 'Equipment (XX)' OR [ResCd7] = 'Lec Facilities (LEC)')) 
                       OR ([PRIORITY] = '2' AND [ActOTR] >= '16' AND ([ResCd7] = 'Equipment (XX)' OR [ResCd7] = 'Lec Facilities (LEC)')) 
                       OR (([Rpt5] = '1' OR [Rpt30] = '1' OR [Chronic] = '1') AND ([ResCd7] = 'Equipment (XX)' OR [ResCd7] = 'Lec Facilities (LEC)')) THEN 'Yes' ELSE 'No' END AS [Measured],  
                CASE WHEN [Reviewed] = 1 THEN 'Yes' ELSE 'No' END AS [Reviewed],
                CASE WHEN [Rpt5] = 1 OR [Rpt30] = 1 THEN 'Yes' ELSE 'No' End As [Repeat],
                CASE WHEN [Chronic] = 1 THEN 'Yes' ELSE 'No' END AS [Chronic],
                CASE WHEN [ResCd7] = 'Equipment (XX)' THEN 'XX' 
                     WHEN [ResCd7] = 'Isolated to Customer (ITC)' THEN 'ITC' 
                     WHEN [ResCd7] = 'Information (INF)' THEN 'INF' 
                     WHEN [ResCd7] = 'Test OK (TOK)' THEN 'TOK' 
                     WHEN [ResCd7] = 'Lec Facilities (LEC)' THEN 'LEC' 
                     WHEN [ResCD7] = 'Dispatched No Trouble Found (NTF)' THEN 'NTF' 
                     WHEN [ResCD7] = 'Cleared While Testing (CWT)' THEN 'CWT' END AS [Resolution]
                FROM [SNA_Ticket_Detail] WHERE ([CompID] = @CompID)"

Above is the current query which relies on a 0 or 1 flag within the table. Seen on line CASE WHEN [Rpt5] = 1 OR [Rpt30] = 1 THEN 'Yes' ELSE 'No' End As [Repeat],

What I want to do is replace that with something along the lines of

CASE WHEN (SELECT COUNT([XX_CIRCUIT_ID]) FROM SNA_Ticket_Detail WHERE (CONVERT(CHAR(10), [DtRFC], 101) BETWEEN ([DtRFC] - 6) AND ([DtRFC])) AND (XX_CIRCUIT_ID = XX_CIRCUIT_ID)) > '1' THEN 'Yes' ELSE 'No' End As [Repeat],

This doesn't work. It basically counts all rows instead of just the rows that match the current row's circuit ID and fall within the last month.

I don't know how to code this properly to get it to work, or even if it's possible within the query.

From stackoverflow
  • You want to run a subquery against the same table, so you need to alias the two uses of the table (recent and td below). When looking for rows of interest, in this case ones with the same Circuit ID and within a certain range, you need to make sure you're not looking at the exact same row. Hence the check on [MAIN_TICKET_ID]. You could do a COUNT as per your example, or you can do an EXISTS() as below.

    SELECT [MAIN_TICKET_ID], [CompID], [ActMTTR], [ActOTR], [DtCr], [DtRFC],
                    CASE WHEN [PRIORITY] = 1 THEN '1' 
                         WHEN [PRIORITY] = 2 THEN '2' 
                         WHEN [PRIORITY] = 3 THEN '3' END AS [PRIORITY],
                    CASE WHEN ([PRIORITY] = '1' AND [ActMTTR] >= '4' AND ([ResCd7] = 'Equipment (XX)' OR [ResCd7] = 'Lec Facilities (LEC)')) 
                           OR ([PRIORITY] = '1' AND [ActOTR] >= '14' AND ([ResCd7] = 'Equipment (XX)' OR [ResCd7] = 'Lec Facilities (LEC)')) 
                           OR ([PRIORITY] = '2' AND [ActMTTR] >= '6' AND ([ResCd7] = 'Equipment (XX)' OR [ResCd7] = 'Lec Facilities (LEC)')) 
                           OR ([PRIORITY] = '2' AND [ActOTR] >= '16' AND ([ResCd7] = 'Equipment (XX)' OR [ResCd7] = 'Lec Facilities (LEC)')) 
                           OR (([Rpt5] = '1' OR [Rpt30] = '1' OR [Chronic] = '1') AND ([ResCd7] = 'Equipment (XX)' OR [ResCd7] = 'Lec Facilities (LEC)')) THEN 'Yes' ELSE 'No' END AS [Measured],  
                    CASE WHEN [Reviewed] = 1 THEN 'Yes' ELSE 'No' END AS [Reviewed],
                    CASE WHEN EXISTS ( select * from SNA_Ticket_Detail recent 
                  where recent.XX_CIRCUIT_ID = td.XX_CIRCUIT_ID
                  AND recent.[MAIN_TICKET_ID] <> td.[MAIN_TICKET_ID]
                  AND datediff( month, recent.[DtRFC], td.[DtRFC] ) < 1 
                 AND recent.[DtRFC] < td.[DtRFC]) 
          THEN 'Yes' ELSE 'No' End As [Repeat],
                    CASE WHEN [Chronic] = 1 THEN 'Yes' ELSE 'No' END AS [Chronic],
                    CASE WHEN [ResCd7] = 'Equipment (XX)' THEN 'XX' 
                         WHEN [ResCd7] = 'Isolated to Customer (ITC)' THEN 'ITC' 
                         WHEN [ResCd7] = 'Information (INF)' THEN 'INF' 
                         WHEN [ResCd7] = 'Test OK (TOK)' THEN 'TOK' 
                         WHEN [ResCd7] = 'Lec Facilities (LEC)' THEN 'LEC' 
                         WHEN [ResCD7] = 'Dispatched No Trouble Found (NTF)' THEN 'NTF' 
                         WHEN [ResCD7] = 'Cleared While Testing (CWT)' THEN 'CWT' END AS [Resolution]
                    FROM [SNA_Ticket_Detail] td WHERE ([CompID] = @CompID)
    

    You should check the datediff does what you want - just play with some test data. Also you probably want to ensure the 'recent' line isn't actually after the one being retrieved, so I've added:

    AND recent.[DtRFC] < td.[DtRFC]
    

    Although if you know your ticket ids are sequential you could do the same thing with them instead of the date field.

  • try

    SELECT(COUNT()...) > 1

    instead of

    SELECT(COUNT()...) > '1'

  • If you alias SNA_Ticket_Detail (e.g SNA_Ticket_Detail SNA) in the outer query, you can reference that in the subquery

    For simplicity, also alias SNA_Ticket_Detail in the subquery (SNA_Ticket_Detail SNA_sub).

    Then, where you currently have (XX_CIRCUIT_ID = XX_CIRCUIT_ID) this would change to (SNA_sub.XX_CIRCUIT_ID = SNA.XX_CIRCUIT_ID)

  • For the minimum change to your SQL, change your sub-query like this:

    SELECT ... FROM SNA_Ticket_Detail AS i WHERE ... AND i.XX_CIRCUIT_ID = [SNA_Ticket_Detail].[XX_CIRCUIT_ID]
    

    You must somehow reference the outer table; you can't just compare XX_CIRCUIT_ID and XX_CIRCUIT_ID for equality - this comparison will always be true. ;-)

    Instead, you must compare the outer XX_CIRCUIT_ID, referenced as [SNA_Ticket_Detail].[XX_CIRCUIT_ID] to the inner XX_CIRCUIT_ID, referenced as i.XX_CIRCUIT_ID for clarity.

  • That is one messy query...

    You'll need to correlate your "inner" query (a.k.a. subquery) with the outer query. The concept is generally called a correlated subquery. See Rory's solution.

    Or, you could use a JOIN to a view (or derived table) that contains aggregated data. This is the best choice, generally, for performance.
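
    For completeness, a hedged sketch of the derived-table variant (column names follow the question; the 30-day window logic is illustrative and should be tested like Rory's):

    SELECT td.[MAIN_TICKET_ID], td.[CompID], td.[DtRFC],
           CASE WHEN ISNULL(rpt.RecentCount, 0) > 0 THEN 'Yes' ELSE 'No' END AS [Repeat]
    FROM [SNA_Ticket_Detail] td
    LEFT JOIN (
        SELECT a.[MAIN_TICKET_ID], COUNT(*) AS RecentCount
        FROM [SNA_Ticket_Detail] a
        JOIN [SNA_Ticket_Detail] b
          ON b.[XX_CIRCUIT_ID] = a.[XX_CIRCUIT_ID]
         AND b.[MAIN_TICKET_ID] <> a.[MAIN_TICKET_ID]
         AND b.[DtRFC] >= DATEADD(day, -30, a.[DtRFC])
         AND b.[DtRFC] <  a.[DtRFC]
        GROUP BY a.[MAIN_TICKET_ID]
    ) rpt ON rpt.[MAIN_TICKET_ID] = td.[MAIN_TICKET_ID]
    WHERE td.[CompID] = @CompID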

Showing tab with lots of ComboBox controls is slow with WinForms

I have set up a dialog with several tabs. One of these contains twenty combo boxes, each with over 100 items, added like this :

foreach (var x in collection)
{
    string text = FormatItem (x);
    combo.Items.Add (text);
}

so there is nothing fancy at all with the items. They are plain strings and the combo boxes get filled when the dialog is created. This happens almost instantaneously.

However, when the user clicks on the tab containing all these combo boxes for the very first time, the GUI freezes for several seconds (and I am running on a really beefy machine).

I loaded the symbols for System.Windows.Forms and tried to break into the debugger while the program is stuck. What I have discovered is a stack trace with the following calls:

System.Windows.Forms.Control.CreateHandle()
System.Windows.Forms.ComboBox.CreateHandle()
System.Windows.Forms.Control.CreateControl(...) x 3
System.Windows.Forms.Control.SetVisibleCore(true)
System.Windows.Forms.TabPage.Visible.set(true)

which results in plenty of native transitions, WndProc calls, etc. I suppose this happens for every single item in every combo box. Phew.

Obviously, I cannot optimize WinForms. But maybe I can take some action to avoid all this hell breaking loose on my poor GUI? Any ideas?

Nota bene:

  1. I've no event handlers attached on the combo boxes which could be called when the controls get created for real.

  2. If I try to access the Handle property of the combo boxes just after having created and populated the form, I pay the penalty at that moment, rather than when the tab becomes visible for the first time. But having to wait several seconds when creating the form is not acceptable either. I really want to get rid of the long set up time.

  3. The idea of applying BeginUpdate and EndUpdate does not apply here: these should be used to prevent the control from repainting when its items list gets filled. But in my case, the problem happens well after the control has been set up.

From stackoverflow
  • What you're saying is not consistent with anything I ever observed... :s

    But have you tried using .BeginUpdate / .EndUpdate ?

    Another thing you could try is not to populate the boxes until needed. Delay it until the box gets focus, for example... (If you trap the dropdown event some users might be annoyed that the up/down arrow keys won't work.)

    Pierre : Yeah, postponing the filling of the controls' items could work... but it requires quite a bit of tweaking, since I then cannot use the `SelectedIndex` and other such properties.
  • Instead of iterating your collections, wouldn't setting ComboBox.DataSource be a viable, and much faster, alternative?

    comboBox1.DataSource = myCollection1;
    comboBox2.DataSource = myCollection2;
    comboBox3.DataSource = myCollection3;
    // and so on...
    

    Here is a more complete example:

    public class Entity
    {
        public string Title { get; set; }

        public override string ToString()
        {
            return Title;
        }
    }

    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();

            List<Entity> list = new List<Entity>
            {
                new Entity {Title = "Item1"},
                new Entity {Title = "Item2"},
                new Entity {Title = "Item3"}
            };

            comboBox1.DataSource = list;
        }
    }
    
  • Lots of controls on a form can be a problem. I once had a form that dynamically created between 50-100 textbox controls. It was slow to load.

    We solved that problem by using a datagrid instead. It's a control that is optimized for lots of data. I don't know what your exact requirements are, but it might work.

  • Everything I have tried so far has failed to speed up the first display of the tab containing all the combo boxes. Data binding didn't help either.

    I finally decided to fix the issue with a trick similar to what danbystrom proposed, i.e. only populating the Items collection when focus first arrives on a combo. This still produces a noticeable delay while all the items get created (within a BeginUpdate/EndUpdate pair of method calls), but it is tolerable (approx. 200 ms versus several seconds in my original scenario).
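
    A hedged sketch of that lazy-fill trick (names are made up; uses the C# 2.0 anonymous-delegate syntax, inside the form class):

    private readonly Dictionary<ComboBox, bool> filled = new Dictionary<ComboBox, bool>();

    private void HookLazyFill(ComboBox combo, IEnumerable<string> items)
    {
        filled[combo] = false;
        combo.Enter += delegate
        {
            if (filled[combo]) return;
            combo.BeginUpdate();            // suppress repaints while adding
            foreach (string text in items)
                combo.Items.Add(text);
            combo.EndUpdate();
            filled[combo] = true;
        };
    }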

Should I switch from vim to emacs, and if so any suggestions?

First of all, I love vim. I have been using vim for a few years now and it has changed my perceptions of what an editor is capable of. I also love modes.

However I am thinking of switching and here is why:

I really like Xcode and it has emacs bindings. I use Mac OS X and it has emacs functionality in all text input fields. I hate using the mouse, and I think switching would make sense.

I'm also kind of tired of doing all my work in terminal windows: with Xcode I would have access to a great IDE, and a great rat-free editor (emacs).

Having said all this, I find the emacs Meta-key approach a little uncomfortable and a little daunting, plus I have spent a lot of time learning to do things in vim.

Should I switch? And if so has anyone else made this switch or used both vim and emacs and have an opinion about which they prefer and why?

Also for emacs users out there who have used Xcode, how good is the emacs implementation in Xcode? Worth the switch?

From stackoverflow
  • No. Once a vimer, forever a vimer.

  • This may be easier: Vi Input Manager Plugin (works in Xcode)

  • Keep in mind emacs has viper-mode which allows you to continue to use your current vi key bindings. As for the meta-key thing, I mapped my caps-lock to ctrl and my right-shift to alt and have been pretty happy with it.

    projecktzero : Unfortunately, viper-mode is not VIM. It's just vi key bindings. There's another addon for emacs vimpulse that supposedly is closer to VIM than vi.
  • The only reason I would switch to emacs (and I try every once in a while) is for the wonderful elisp packages -- specifically all the special editing modes and console/debugger modes.

    I have never used Xcode but I assume from your question that these are emacs key-bindings only. I.e., you can't use any emacs package with Xcode.

    So no. I'd say don't switch to emacs. If you want to use Xcode either learn the key bindings or use some Xcode vi mode (as has been suggested by others).

  • Why can't you use both? Using Emacs doesn't mean you have to stop using Vim.

    Benefits of learning and using both:

    • Sometimes one editor is better than the other for some particular language. Better syntax highlighting, better indentation rules, better macros, better integration with external tools, or whatever. You will always be using the best editor for the job if you're able to choose between Vim and Emacs.
    • Sometimes a community gravitates toward one editor or the other. You can participate in the community no matter which is used.
    • You can learn neat tricks from one editor and port them to the other. Both editors have features that you'd never think of if you stuck to one exclusively.
    • If some computer only has one or the other set up, you can still use it effectively.
    • Learning is fun.
    • Geek cred.

    Disadvantages:

    • Learning takes time and effort.
    • Fragmentation and duplication of effort (all of your favorite tricks and keymappings have to be written twice, in Vim script and elisp).

    I see the advantages outweighing the disadvantages.

    I use Emacs for Lisp code, and because I use Lisp for web development, I also stay in Emacs for CSS and Javascript and such. I use Vim for Ruby, Python, Perl, PHP, BASH, plaintext, and almost everything else. Both editors have strengths and weaknesses, but mostly both have strengths.

    The only way to know if you like Vim or Emacs better is to invest time in both, so you can make an informed decision. Maybe you'll find that Emacs fits the way you think and work better, or maybe not. It's such a subjective thing that the only way to know is by trying.

    Text editors aren't religions, and the free disk space in your brain is for all intents and purposes unlimited. So use both and enjoy.

  • There is not much Emacs support in Mac OS X and Xcode. It is mostly a few keystrokes.

    Mac OS X comes with a terminal version of Emacs. A very good Emacs for Mac OS X is Aquamacs.

Quick way to return a list of custom objects from a page method w/o a separate BLL

I am using jQuery to retrieve a JSON object from a page method. I have a DAL which uses SubSonic, and if I return objects created from SubSonic-generated classes I will clog up the pipes. :) You know, all public properties get serialized. I don't want a separate business layer for this application, because it's small and focused on read operations, and yet another layer seems like overkill.

[WebMethod]
public static List<object> GetFiles()
{
    FileCollection collection = DB
        .Select()
        .From(DataAccess.File.Schema)
        .ExecuteAsCollection<FileCollection>();

    List<object> files = new List<object>(collection.Count);

    foreach (DataAccess.File file in collection)
    {
        files.Add(new {
                          file.FileId,
                          file.ApplicantFirstName,
                          file.ApplicantLastName,
                          file.UploadDate
                      }
        );
    }

    return files;
}

It works and I get a nice JSON object in return (disregard the DateTime value):

[{"FileId":1,"ApplicantFirstName":"Paweł","ApplicantLastName":"Krakowiak","UploadDate":"\/Date(1235656448387
)\/"}]

Is this a good approach? I am concerned about List<object> - is it worse than returning say a List<SomeDomainObject>? Performance? Something else?

This is .NET 2.0, I can't use 3.5 features. At least anonymous types work...

From stackoverflow
  • The biggest recommendation might be to make it a "Collection" rather than a List, but with a simple webservice return, it isn't as big of a deal, as that recommendation is most typically in environments where the object still lives in a .NET assembly.

    I think it is easy to read as well.

  • The only downside to using List<object> instead of List<SomeDomainObject> in this scenario would be losing strongly-typed access when calling your GetFiles method directly from .net code.

    Pawel Krakowiak : Thanks. I don't need to use that method in .NET anywhere, its sole purpose is to get some data back to the client, hence it looks like List will be fine.
  • Looks like there's nothing wrong with my approach. All I want to do is return a JSON object to the calling client (browser) to update the UI. This application does 99% read operations, so I am fine with it. I actually started adding Services and Domain layers (I keep my business entities in the latter), but I'm going to throw them away. I really try to keep it simple for this application and not add stuff I don't need.

How do I write a HQL Query for this?

I want a single HQL query that returns all groups containing a given user that were created before a given date. Somehow I can't get it right.

public class Group
{
    @ManyToMany
    Set<User> users;
    Date created;
}

public class User
{
...
}

From stackoverflow
  • II-Bhima's answer is essentially right - here is a little fix:

    select g from Group as g
    inner join g.users as user
    where g.created < :createdDate
    and user = :user

    You need it so that Groups are returned and not Object[] with Group-User 2-tuples.

Passing a stored procedure call from a LINQ data context to another method. C#

Hi. I feel the answer to this may lie with delegates, but I am having a hard time grasping the concept of delegates. The main problem is that every explanation and example of delegates I have ever read is a roundabout way of doing something you could accomplish without delegates, so it doesn't teach me anything. I learn best by seeing real-world examples.

Now that that is out of the way, here is what I want to accomplish. I have a Data Context (.dbml) with numerous stored procedures. I also have multiple situations where I am using the exact same 20 lines of code to update one column in a table, but the only difference, other than using a different datagrid, is the stored procedure being called. In an effort to reduce the amount of code used, I was hoping for a way to pass the stored procedure call from the data context object as a parameter. That way I can move all that code to one reusable function. Is this even possible? I am using Visual Studio 2008 and C#.

Thanks for any guidance.

From stackoverflow
  • While I can't help you with the SQL / stored proc side of things, I can try to explain delegates, at least from the C# point of view.

    While normally you declare functions as being part of a class (and hence they are strongly attached to the class), sometimes you want to put them in a variable. Once you do this, you can then pass it around, much like you would with any other variable.

    So we know that a string is the kind of variable that you stick text into. Following that, a delegate is the kind of variable that you stick functions into. This however is very confusing, as C# isn't consistent or clear with how it names things in your code. Observe:

    public void WriteText() {
      Console.WriteLine("Hello");
    }

    ...
    Action x = WriteText;
    x(); // will invoke the WriteText function

    Note we're using "Action" where logic would imply the code should read delegate x = WriteText. The reason we need this extra mess is because "delegate" itself is like System.Object. It doesn't contain any information, and it's kind of the "base class" behind everything. If we actually want to use one, we have to attach some Type information. This is where Action comes in. The definition of Action is as follows:

    public delegate void Action();

    What this code says is "we're declaring a new delegate called Action, and it takes no parameters and returns void". Thereafter, if you have any functions which also take no parameters and return void, you can put them in variables of type Action.

    Now, you can stick a normal function into a delegate, but you can also stick an "anonymous" function into a delegate. An "anonymous" function is something that you declare inline, so rather than attaching the already-declared WriteText function, we could build a new one up in the middle of our code like this:

    Action x = () => { Console.WriteLine("Hello"); };
    x(); // invoke our anonymous function.

    What this is doing is using the C# "lambda syntax" to declare a new anonymous function. The code that runs as part of the function (when we invoke it) is the Console.WriteLine.

    SO

    To put it all together, you could have a "SaveData" function, and pass it a delegate. It could do its 20 lines of table building, then pass that table to the delegate, and the delegate could invoke the appropriate stored-proc. Here's a simple example:

    public void SaveData(Action<Table> saveFunc){
        var t = new Table();
        ... 20 lines of code which put stuff into t ...
        saveFunc(t);
    }

    SaveData( t => StoredProc1.Invoke(t) ); // save using StoredProc1
    SaveData( t => StoredProc37.Invoke(t) ); // save using StoredProc37

    SO

    Having said ALL OF THAT, this isn't how I'd actually solve the problem. Rather than passing the delegate into your SaveData function, it would make more sense to have your SaveData function simply return the table, and then you could invoke the appropriate StoredProc without needing delegates at all.

    Ziltoid : Thank you very much. That helped a lot. I got it figured out now
    Orion Edwards : Glad my giant pile of writing was useful :-)
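
    A hedged sketch of that delegate-free alternative (Table and the StoredProc names are taken from the answer's own hypothetical example):

    // build the table once in a shared helper...
    public Table BuildData() {
        var t = new Table();
        // ... the shared 20 lines that populate t ...
        return t;
    }

    // ...then each caller invokes whichever stored procedure it needs
    var table = BuildData();
    StoredProc1.Invoke(table);   // or StoredProc37.Invoke(table), etc.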