Tuesday, May 3, 2011

Can you make HTTP client connections from a web app (Flash, Java)?

Before I jump both feet in and waste my time learning something that I can't use for my needs, I have to ask:

Can I make an HTTP connection to another website from within a Java applet or a Flash file?

Say I have a Java applet that wants to load results from Google. Will the JVM give me access to such a call? What about Flash? I've read about URLLoader but am confused by this:

"Data loading is not allowed if the calling file is in the local-with-file-system sandbox and the target resource is from a network sandbox. Data loading is also not allowed if the calling file is from a network sandbox and the target resource is local."

Anyway, if it isn't possible, will it fail silently for the user or will it ask for permission?

Thanks a lot.

From stackoverflow
  • Of course you can do that in Java, and also in Flash. However, some browsers and environments may restrict it by enforcing security policies.

    The warnings you found relate to local<->remote access. For web applications hosted on a network, you can usually access other network resources. (Well, some environments may restrict you for "other" domains - you'll need to check the security models.)

    But the modern approach is usually to do that with JavaScript. Google for "Ajax" and look for a framework that best fits your requirements - that would save a lot of time.

  • Yes, but the problem is that for security reasons, many browsers only allow the application to connect to the domain the application came from.

    So, for example, if I go to website A and my app tries to access website B, it can be blocked (e.g., to avoid spamming, attacks, etc.). A work-around, if you control website A, is to create a "pass-through" script on website A that forwards the request to B.

  • Can I make a http connection to another website from within a java applet or a flash file?

    From Flash, yes. You do need to read up on the Flash Security Model to be sure what you can and what you cannot ask. Most of the time this is to stop unauthorized access and/or XSS or similar attacks. Flex (a related technology), for example, does not give you access to your disk, whereas AIR does. Take your pick.

    As for applets, you'll have to wait for someone else to explain it to you. But AFAIK, it also has a security model to pose the least threat to its users.

  • Not with ease. By definition, the Java sandbox won't let your applet call anywhere other than the origin it came from. You will have to sign your applet properly. Then users will either trust your applet and let it call elsewhere, or deny it - it's up to the user. You can self-sign your applet, but I would personally never allow such a thing on my computer. It's good for testing and things like that. So you'd probably need to buy a certificate from a reputable source. The same goes for Flash; I believe the idea is identical.

  • Java, JavaScript, Flash, etc., implement some form of "same origin" policy which only allows untrusted code to read from the site it was downloaded from. One way around this, supported to some extent by recent versions of Flash and by Java since 6u10, is crossdomain.xml (Google it). This allows sites to permit access by code downloaded from other sites. Note that this requires the site you want to access to grant you permission to do so.
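    A site that wants to grant such access serves a policy file from its web root; for Flash that is crossdomain.xml. A minimal sketch (the domain is a placeholder):

```xml
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM
  "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <!-- allow SWFs served from example.com subdomains to read this site's resources -->
  <allow-access-from domain="*.example.com" />
</cross-domain-policy>
```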

Do you expect query string parameter names to be case sensitive?

Silverlight is case sensitive for query string parameters, so the following code would return false for a query string of "callid=5":

string callId;
if (System.Windows.Browser.HtmlPage.Document.QueryString.TryGetValue("callId", out callId))
{
....
}

Microsoft defends the decision by citing the www.w3.org spec, but I think it leads to a less friendly experience for people trying to link to you, or give a URL over the phone.

Looks like Stackoverflow is case insensitive: http://stackoverflow.com/search?q=silverlight+bug http://stackoverflow.com/search?Q=silverlight+bug

From stackoverflow
  • Yes, I'm used to it being case sensitive, and have therefore been programming to it for a long time. I know some people who have implemented intermediate parsing to convert parameter names to lowercase, or do other things server-side; it really depends on what you are working with specifically.

    As for usability, yes, it is harder to read. But at the same time, a URL with a query string is not easy to give out over the phone anyway.
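    The intermediate lowercase-parsing approach mentioned above can be sketched client-side as well; a hypothetical helper (plain JavaScript, not a Silverlight API):

```javascript
// Parse a query string into an object whose keys are lower-cased,
// so "callId=5" and "callid=5" resolve to the same entry.
function parseQueryInsensitive(search) {
  var params = {};
  var pairs = search.replace(/^\?/, '').split('&');
  for (var i = 0; i < pairs.length; i++) {
    if (!pairs[i]) continue;
    var parts = pairs[i].split('=');
    var key = decodeURIComponent(parts[0]).toLowerCase();
    params[key] = decodeURIComponent(parts[1] || '');
  }
  return params;
}
```

    Lookups then go through the lower-cased key, whatever casing the caller typed into the URL.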

  • I think you should focus on your naming conventions rather than on the implementations of standards, making sure to avoid similar field names and mixed case. For example, you can adopt a convention of names that can be read out over the phone as "all lowercase" or "all uppercase".

Ruby: How do I use symbols to represent things in an array?

I have an array of arrays that looks like this:

fruits_and_calories = [
  ["apple", 100],
  ["banana", 200],
  ["kumquat", 225],
  ["orange", 90]
]

I also have a method I want to invoke on each element of the array:

fruits_and_calories.each do |f| eat(f[0], f[1]) end

I'd really like to be able to say something like:

fruits_and_calories.each do |f| eat(f[:name], f[:calories]) end

Is there a way that I can pull this off without having to change each item in the array (for example, by iterating through it and somehow adding the symbols in)? Or, if that's too hard, is there a better alternative?

From stackoverflow
  • An array is always indexed by numbers, so as far as I know it's not possible with the standard Array.

    Personally I'd just opt for using a comment above the code to hint what f[0] and f[1] stands for.

    But if you are hell-bent on doing it, I guess monkey-patching the Array class works:

    
    class Array
      def name ; self[0] ; end
      def calories ; self[1] ; end
    end
    
    # then call it with:
    fruits_and_calories.each {|f| eat(f.name, f.calories) }
    
    tomafro : Please don't do this! Adding methods to all array instances might be useful the first time, but that way madness lies...
    ba : I agree completely. :)
  • Is there some reason it must be an array, per se? That seems to cry out to be a hash, or a class for Fruit.

    Kyle Kaitan : No, there's no reason it has to be an array. But this is used in a one-off fashion in a single method and nowhere else, so a class seems like using a nuclear-powered rotary saw where I only wanted a hammer.
    Kyle Kaitan : If I could use a hash without having to specify "this one is the name, this one is the calories" for every single element, that would be an excellent alternative too. Can that be done?
    Charlie Martin : Yes, see Pesto's answer. You could also use "apple" => 100 of course.
    Walt Gordon Jones : ("so a class seems like using a nuclear-powered rotary saw...") Kyle, are you saying Hash is a class and Array is not? In Ruby they are both classes.
    Charlie Martin : He means "write a new class." you could write a class with :attr fruit, calories and then make an array of them.
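    To Kyle's follow-up above: you don't have to label each element by hand; Ruby's Hash[] will build a hash straight from an array of [key, value] pairs (a small sketch reusing the question's data):

```ruby
fruits_and_calories = [
  ["apple", 100],
  ["banana", 200],
  ["kumquat", 225],
  ["orange", 90]
]

# Hash[] accepts an array of two-element arrays directly
fruits = Hash[fruits_and_calories]

fruits.each do |name, calories|
  puts "eat #{name} (#{calories} calories)"
end
```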
  • The best answer is not to use an Array at all, but to use a Hash:

    fruits_and_calories = { :apple => 100,
                            :banana => 200,
                            :kumquat => 225,
                            :orange => 90}
    fruits_and_calories.each do |name, calories| 
      eat(name, calories)
    end
    
    glenn mcdonald : Actually, the key is the "|name, calories|" part, which works the same with his original array of arrays...
    Pesto : Yes, an array would work, but the design is begging for a hash.
  • Pesto's answer (use a hash) is a good one, but I think I'd prefer to use a Struct.

    Fruit = Struct.new(:name, :calories)
    
    fruits = [
      Fruit.new("apple", 100),
      Fruit.new("banana", 200),
      Fruit.new("kumquat", 225),
      Fruit.new("orange", 90)
    ]
    
    fruits.each {|f| eat(f.name, f.calories)}
    

    This also lends itself to changing eat from taking both the name and calories, to taking a fruit instance:

    fruits.each {|f| eat(f)}
    
  • Without changing the data structure at all, you could change the block arguments to achieve the same thing:

    fruits_and_calories.each do |name, calories| eat(name, calories) end
    

    This works because Ruby will auto-expand the inner arrays (["apple", 100], etc) into the argument list for the block ('do |name, calories| ... end'). This is a trick that Ruby inherited from Lisp, known as 'destructuring arguments'.

Access db loop - for each record in one table create array of records in another table

Is it possible to create a nested looping query in Access DB that will update a third table?

I have a master (header) table:

------------------------
masters
------------------------
num | modality | cost  |
------------------------
01  | thing    | 23.00 |
02  | thing    | 42.00 |
03  | thing    | 56.00 |
04  | apple    | 11.00 |
05  | apple    | 17.00 |

and a temporary table containing detail info that I'll use to create a third (actual) details table, which will key off of the masters table.

Here's a sample of the temp details table:

----------------------------------
temps
----------------------------------
modelnumber | modality | priceEa |
----------------------------------
123         | thing    | 1.00    |
234         | apple    | 2.00    |
345         | apple    | 3.00    |
456         | apple    | 4.00    |
567         | thing    | 5.00    |

Basically, I need to loop through every record in the masters table.

Outer loop:

For each record in the masters table, grab the modality.

Inner loop:

Then for each record in the temps table, where the modalities match, create a record in the details table (and in the process, do some calculations based on temps.priceEa and masters.cost).

This should create one new record in the details table for every matching masters/temps pair (per modality).

The details table should end up looking like:

----------------------------------------------------------
details
----------------------------------------------------------
num | modelnumber | modality | priceEa | adjustedCost   |
----------------------------------------------------------
01  | 123         | thing    | 1.00    | (do calc here) |
01  | 567         | thing    | 5.00    | (do calc here) |
02  | 123         | thing    | 1.00    | (do calc here) |
02  | 567         | thing    | 5.00    | (do calc here) |
03  | 123         | thing    | 1.00    | (do calc here) |
03  | 567         | thing    | 5.00    | (do calc here) |
04  | 234         | apple    | 2.00    | (do calc here) |
04  | 345         | apple    | 3.00    | (do calc here) |
04  | 456         | apple    | 4.00    | (do calc here) |
05  | 234         | apple    | 2.00    | (do calc here) |
05  | 345         | apple    | 3.00    | (do calc here) |
05  | 456         | apple    | 4.00    | (do calc here) |
...etc
From stackoverflow
  • 
    SELECT m.num, t.modelnumber, m.modality, t.priceEa
    INTO myNewTempTable
    FROM masters m INNER JOIN temps t ON m.modality = t.modality
    ORDER BY m.num, t.modelnumber
    
    
    42 : I think you did it! It's funny how sometimes SQL performs looping updates/inserts without really looking like it's going to do it... it makes my brain think the problem through the wrong way. Thank you.
    onedaywhen : Erm, SQL does not loop, conceptually. Perhaps it would help to think in terms of SQL updating/inserting all affected rows at once.
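    The set-based behavior is easy to see outside Access too; a sketch of the same join in SQLite via Python (the priceEa * cost multiplication is just a stand-in for whatever adjustedCost calculation you need):

```python
import sqlite3

# In-memory stand-ins for the Access tables (a subset of the sample data).
conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE masters (num TEXT, modality TEXT, cost REAL)")
c.execute("CREATE TABLE temps (modelnumber TEXT, modality TEXT, priceEa REAL)")
c.executemany("INSERT INTO masters VALUES (?, ?, ?)",
              [("01", "thing", 23.00), ("04", "apple", 11.00)])
c.executemany("INSERT INTO temps VALUES (?, ?, ?)",
              [("123", "thing", 1.00), ("234", "apple", 2.00),
               ("345", "apple", 3.00)])

# One joined statement creates every master/temp pairing per modality --
# no explicit looping required.
c.execute("""
    CREATE TABLE details AS
    SELECT m.num, t.modelnumber, m.modality, t.priceEa,
           t.priceEa * m.cost AS adjustedCost
    FROM masters m INNER JOIN temps t ON m.modality = t.modality
""")

rows = c.execute(
    "SELECT * FROM details ORDER BY num, modelnumber").fetchall()
for row in rows:
    print(row)
```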

How do you re-populate dynamically generated options for a select field?

Background I have two select form fields chained together: duration and frequency. When the user selects a frequency, the duration options are dynamically inserted. There are default options, but those are just so that the field isn't empty when the user expands it.

For example, the frequency options are "day", "other day", and "week". If I select "day", the duration options change to "5 days", "15 days", and "30 days".

Problem The problem comes when the user submits the form with errors: the form is returned with all the form fields re-populated and the errors highlighted -- except for the duration select field, whose options are dynamically generated. It is not highlighted and its options are the default options.

Is there a way that I can have these options re-populated if the user submits with an error? We are doing quite a bit of JavaScript validation, so this situation shouldn't happen often, but I would like to make getting an error as painless an experience as possible for the users.

Code I'm using jquery and a jquery plugin called cascade to chain the two fields together. (http://plugins.jquery.com/project/cascade)

Here's my custom JavaScript.

This script generates the list of options:

var list1 = [
    {'When':'86400','Value':' ','Text':' '},
    {'When':'172800','Value':' ','Text':' '},
    {'When':'604800','Value':' ','Text':' '},
    {'When':'86400','Value':'432000','Text':'5 days'},
    {'When':'86400','Value':'1296000','Text':'15 days'},
    {'When':'86400','Value':'2505600','Text':'30 days'},
    {'When':'172800','Value':'1296000','Text':'15 days'},
    {'When':'172800','Value':'2505600','Text':'30 days'},
    {'When':'172800','Value':'3888000','Text':'45 days'},
    {'When':'604800','Value':'2505600','Text':'4 weeks'},
    {'When':'604800','Value':'3715200','Text':'6 weeks'},
    {'When':'604800','Value':'4924800','Text':'8 weeks'}
];

function commonTemplate(item) {
    return "<option value='" + item.Value + "'>" + item.Text + "</option>"; 
};

function commonMatch(selectedValue) {
    return this.When == selectedValue; 
};
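The template/match pair above is plain data transformation, so it can be exercised without the DOM or the plugin; a sketch (optionsFor is a hypothetical stand-in for what cascade does on each change, using a subset of list1):

```javascript
var list1 = [
  {When: '86400',  Value: '432000',  Text: '5 days'},
  {When: '86400',  Value: '1296000', Text: '15 days'},
  {When: '604800', Value: '2505600', Text: '4 weeks'}
];

function commonTemplate(item) {
  return "<option value='" + item.Value + "'>" + item.Text + "</option>";
}

function commonMatch(selectedValue) {
  return this.When == selectedValue;
}

// What cascade does, in essence: keep the items whose match() accepts
// the selected value, then render each through the template.
function optionsFor(selectedValue) {
  var html = '';
  for (var i = 0; i < list1.length; i++) {
    if (commonMatch.call(list1[i], selectedValue)) {
      html += commonTemplate(list1[i]);
    }
  }
  return html;
}
```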

And this is the script that triggers the generation of the select options:

jQuery("#duration").cascade("#frequency", {
    list: list1,      
    template: commonTemplate,
    match: commonMatch    
})

The Question Any thoughts on how to get the dynamically generated duration options to re-populate when the form is returned to the browser with errors? I could either use the cascade plugin I'm currently using or some other method.

Help is muchly appreciated. :-)

From stackoverflow
  • I am not familiar with this plugin, but couldn't you just fire the change() event of #duration and/or #frequency on document.ready?

    $(document).ready(function() {
        $('#duration').change();
        $('#frequency').change();
    });
    

    I am pretty sure all the script is doing is binding stuff to the change event of the select (by default, at least) so that should trigger the plugin to work its magic...

    Rick : This sounds like it will work and am working on it. But, the problem now seems to be that frequency, which is not dynamically generated, is not being re-populated. :( Am troubleshooting that and then will see if your solution works.
    Rick : That did it. Thanks, Paolo. :)
    Rick : Though, one final note, I only needed to .change() the #frequency.
    Paolo Bergantino : Yeah, I thought it'd only be the frequency, I just wasn't really sure of the setup so I threw both in there for good measure. I figured you'd sort it out from there. Glad it helped.

Reflection: for frameworks only?

Somebody that I work with and respect once remarked to me that there shouldn't be any need for the use of reflection in application code and that it should only be used in frameworks. He was speaking from a J2EE background and my professional experience of that platform does generally bear that out; although I have written reflective application code using Java once or twice.

My experience of Ruby on Rails is radically different, because Ruby pretty much encourages you to write dynamic code. Much of what Rails gives you simply wouldn't be possible without reflection and metaprogramming and many of the same techniques are equally as applicable and useful to your application code.

  • Do you agree with the viewpoint that reflection is for frameworks only? I'd be interested to hear your opinions and experiences.
From stackoverflow
  • I disagree; my application uses reflection to dynamically create providers. I might also use reflection to control logic flow, if the logic is simple and doesn't warrant a more complicated pattern.

    In C# I use reflection to grab attributes off enumerations, which helps me determine how to display an enumeration to an end user.

    M. Jahedbozorgan : Are you creating a framework or something?! ;)
    JoshBerke : Nope not a framework
  • I disagree, reflection is very useful in application code and I find myself using it quite often. Most recently, I had to use reflection to load an assembly (in order to investigate its public types) from just the path of the assembly.

    Several opinions on this subject are expressed here...

    http://stackoverflow.com/questions/37628/what-is-reflection-and-why-is-it-useful

  • Use reflection when there is no other way! This is a matter of performance!

    If you have looked into .NET performance pitfalls before, it might not surprise you how slow the normal reflection is: a simple test with repeated access to an int property proved to be ~1000 times slower using reflection compared to the direct access to the property (comparing the average of the median 80% of the measured times).

    See this: .NET reflection - performance

    MSDN has a pretty nice article about When Should You Use Reflection?
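    The two call paths are easy to compare in miniature; a sketch (the class name and iteration counts are mine, and absolute timings will vary by JVM):

```java
import java.lang.reflect.Method;

public class ReflectCost {
    private final int value = 42;
    public int getValue() { return value; }

    public static void main(String[] args) throws Exception {
        ReflectCost obj = new ReflectCost();
        Method getter = ReflectCost.class.getMethod("getValue");

        // Direct call path
        long t0 = System.nanoTime();
        int direct = 0;
        for (int i = 0; i < 1000000; i++) direct = obj.getValue();
        long directNanos = System.nanoTime() - t0;

        // The same call through reflection
        t0 = System.nanoTime();
        int reflected = 0;
        for (int i = 0; i < 1000000; i++) reflected = (Integer) getter.invoke(obj);
        long reflectedNanos = System.nanoTime() - t0;

        // Same result either way; the reflective path is typically far slower.
        System.out.println(direct == reflected);
        System.out.println("direct: " + directNanos + " ns, reflective: " + reflectedNanos + " ns");
    }
}
```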

  • If your problem is best solved by using reflection, you should use it.

    (Note that the definition of 'best' is something learnt by experience :)

    The definition of framework vs. application isn't all that black & white either. Sometimes your app needs a bit of framework to do its job well.

  • There's the old joke that any sufficiently sophisticated system written in a statically-typed language contains an incomplete, inferior implementation of Lisp.

    Since your requirements tend to become more complicated as a project evolves, you often eventually find that the common idioms in statically-typed object systems eventually hit a wall. Sometimes reaching for reflection is the best solution.

    I'm happy in dynamically-typed languages like Ruby, and statically-typed languages like C#, but the implicit reflection in Ruby often makes for simpler, easier-to-read code. (Depending on the metaprogramming magic required, sometimes harder to write).

    In C#, I've found problems that couldn't be solved without reflection, because of information I didn't have until runtime. One example: When trying to manipulate some third-party code that generated proxies to Silverlight objects running in another process, I had to use reflection to invoke a specific strongly-typed "Generic" version of a method, because the marshalling required the caller to make an assumption about the type of the object in the other process was in order to extract the data we needed from it, and C# doesn't allow the "type" of the generic method invocation to be specified at run time (except with reflection techniques). I guess you could argue our tool was kind of a framework, but I could easily imagine a case in an ordinary application facing a similar problem.

    JMM : that would be Greenspun's 10th rule "Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp"
    JasonTrue : I knew it went something like that :)
  • Reflection makes DRY a lot easier. It's certainly possible to write DRY code without reflection, but it's often much more verbose.

    If some piece of information is encoded in my program in one way, why wouldn't I use reflection to get at it, if that's the easiest way?

    It sounds like he's talking about Java specifically. And in that case, he's just citing a special case of this: in Java, reflection is so wonky it's almost never the easiest way to do something. :-) In other languages like Ruby, as you've seen, it often is.

    John Topley : Agreed. Writing reflection code in Java is a PITA!
  • Reflection is definitely heavily used in frameworks, but when used correctly can help simplify code in applications.

    One example I've seen before is using a JDK Proxy of a large interface (20+ methods) to wrap (i.e. delegate to) a specific implementation. Only a couple of methods were overridden using an InvocationHandler; the rest of the methods were invoked via reflection.

    Reflection can be useful, but it is slower than a regular method call. See this reflection comparison.

  • Reflection in Java is generally not necessary. It may be the quickest way to solve a certain problem, but I would rather work out the underlying problem that causes you to think it's necessary in app code. I believe this because it frequently pushes errors from compile time to run time, which is always a Bad Thing for large enough software that testing is non-trivial.

  • I think the observation that there shouldn't be any need for the use of reflection in application code and that it should only be used in frameworks is more or less true.

    On the spectrum of how coupled pieces of code can be, code joined by reflection is as loosely coupled as it comes.

    As such, the code doing its job via reflection can quite happily fulfil its role in life knowing nothing about the code that is using it.

Is it bad to use class variables in a multithreaded app?

I inherited a project that needs to be multithreaded. There are three major classes that get used in the worker threads.

BASE CLASS - has a class-level SqlDataAdapter and DataTable.
INHERITED CLASS ONE - uses the inherited SqlDataAdapter and DataTable.
INHERITED CLASS TWO - uses the inherited SqlDataAdapter and DataTable.

Everything seems to work, but I only have two users testing at the same time.

Is having the SqlDataAdapter and DataTable as class-level variables a bad idea?

Update Sorry, it's a SqlDataAdapter, not a SqlTableAdapter. The language is C#. The SqlDataAdapter is from the System.Data.SqlClient namespace and the DataTable from System.Data.

Here is some of the base class:

public abstract class BaseSync
{
    #region Variables
    internal SqlDataAdapter stageDataAdapter;
    internal DataTable stageDataTable;
    #endregion //Variables
}

Part Two

There is also a singleton utility class that all the derived classes use. I don't know if it will cause problems or not. It looks like this:

public class Utility
{ 
    private static readonly Utility _utility= new Utility();

    private Utility()
    { }

    public static Utility GetUtility()
    {
        return _utility;
    }

    public int GetAutoNumber(string tablename, string fieldname, string siteId)
    {
        string _tablename = tablename;
        string _fieldname = fieldname;
        ...
    }

    internal MissingInfo NormalizeRow(DataRow dataRow)
    {

        MissingInfo retVal = MissingInfo.None;

        //Num
        if (dataRow["Num"] == DBNull.Value)
        {
           retVal =MissingInfo.Num;
           dataRow["Num"] = 1;
        }
        ...
    }
}
From stackoverflow
  • This depends on the access level of the objects. As long as they are not static (Shared in VB.NET), you should be fine having them in the object, provided each thread has its own instance of the object.

    Where you run into interesting situations is with static members, which are shared across all instances.

    So the long and short of it is that we would need to see the code.

  • Having variables modified by different threads without synchronization is always a really bad idea.

    You don't mention whether this is the case, though. If you use threads, you need to plan and check what you're doing.

  • You should always consider synchronization when sharing non-constant objects across multiple threads; otherwise you'll end up in trouble someday.

    So it's OK to make it a class variable, but remember to add some locking mechanism for it.

  • The rule about variables is that the more places they could potentially change from, the more chances there are of race conditions, especially if the application evolves.

    There isn't much information in your question, so it is difficult to provide a specific answer. Class-level variables (if public) can often be treated like global variables and are thus accessible from everywhere, raising the risk of corruption.

    A possible approach might be to hide these fields and provide access via class-level functions. You can then do more things because you've created specific points of access to these variables. You would need to be careful to ensure that you never give callers a direct, mutable reference to those objects, which may require some rewriting, but it would make your program safer.

Looking for a PHP Image Library... rounded corners & resizing

Just looking for a good PHP Image Library, I want to display images with rounded corners, do some resizing, and blur some other pictures either on the fly, or on upload.

From stackoverflow
  • Have a go with http://wideimage.sourceforge.net/wiki/MainPage

    It doesn't do it out of the box but you could have a partially transparent PNG that you could put on top of your original image, making it blurry.

  • I'd suggest having a look at ImageMagick.

    There are excellent wrappers for the library in PHP too: http://www.imagemagick.org/script/api.php#php

  • This is a dirty hack I did for a project a while ago. It applies a grayscale image as a transparency map to another image (black is transparent, white opaque; scaling the map to the image's proportions is supported). You could create a fitting rounded-corners transparency map (including antialiasing, whoo!).

    It's slow because it's pure PHP, but I always cache the results anyway.

    $image and $transparencyMap are GD image resources, so you have to imagecreatefromxyz them yourself.

    <?php
    function applyTransparencyMap($image, $transparencyMap) {
        if (!function_exists('extractrgb')) {
            function extractrgb($rgb) {
                $a = ($rgb >> 24) & 0xFF;
                $r = ($rgb >> 16) & 0xFF;
                $g = ($rgb >> 8) & 0xFF;
                $b = $rgb & 0xFF;
                return array($r, $g, $b, $a);
            }
        }
    
        $sx = imagesx($image);
        $sy = imagesy($image);
        $tx = imagesx($transparencyMap);
        $ty = imagesy($transparencyMap);
        $dx = $tx / $sx;
        $dy = $ty / $sy;
    
        $dimg = imagecreatetruecolor($sx, $sy);
        imagealphablending($dimg, false); // keep the per-pixel alpha set below
        imagesavealpha($dimg, true);
    
        for ($y = 0; $y<imagesy($image); $y++) {
            for ($x = 0; $x<imagesx($image); $x++) {
                $intcolor                = imagecolorat($image, $x, $y);
                $intalpha                = imagecolorat($transparencyMap, floor($x*$dx), floor($y*$dy));
                list($tr, $tg, $tb, $ta) = extractrgb($intalpha);
                $alphaval                = 127-floor(($tr+$tg+$tb)/6);
                list($r, $g, $b, $a)     = extractrgb($intcolor);
                $targetAlpha             = max(0, min(127,$alphaval+$a));
                $sct                     = imagecolorallocatealpha($image, $r, $g, $b, $targetAlpha);
                imagesetpixel($dimg, $x, $y, $sct);
            }
        }
    
        return $dimg;
    }
    ?>
    

    On the other hand, better to use WideImage, as apikot suggested. It does the same and more.

  • You can try with this library http://freelogic.pl/thumbnailer/examples

How do I detect which kind of JRE is installed -- 32bit vs. 64bit

During installation with an NSIS installer, I need to check which JRE (32bit vs 64bit) is installed on a system. I already know that I can check a system property "sun.arch.data.model", but this is Sun-specific. I'm wondering if there is a standard solution for this.

From stackoverflow
  • On Linux, my (Java) VM reports java.vm.name=Java HotSpot(TM) 64-Bit Server VM. The javadocs for System declare that System.getProperty will always have a value for java.vm.name, but are silent on sun.arch.data.model.

    Unfortunately they don't specify what the value will be, so some other JVM might just report java.vm.name=Edgar.

    BTW, by "installed on the system", I assume you mean "the current running JVM"?

  • The JVM architecture in use can be retrieved using the "os.arch" property:

    System.getProperty("os.arch");
    

    The "os" part seems to be a bit of a misnomer, or perhaps the original designers did not expect JVMs to be running on architectures they weren't written for. Return values seem to be inconsistent.

    The NetBeans Installer team are tackling the issue of JVM vs OS architecture. Quote:

    x64 bit : Java and System

    Tracked as the Issue 143434.

    Currently we using x64 bit of JVM to determine if system (and thus Platform.getHardwareArch()) is 64-bit or not. This is definitely wrong since it is possible to run 32bit JVM on 64bit system. We should find a solution to check OS real 64-bitness in case of running on 32-bit JVM.

    • for Windows it can be done using WindowsRegistry.IsWow64Process()
    • for Linux - by checking 'uname -m/-p' == x86_64
    • for Solaris it can be done using e.g. 'isainfo -b'
    • for Mac OS X it can't be done using uname arguments; probably it can be solved by creating a 64-bit binary and executing it on the platform... (unfortunately, this does not work :( I've created a binary with only x86_64 and ppc64 arch and it was successfully executed on Tiger..)
    • for Generic Unix support - it is not clear as well... likely checking for the same 'uname -m/-p' / 'getconf LONG_BIT' and comparing it with some possible 64-bit values (x86_64, x64, amd64, ia64).


    Sample properties from different JVMs all running on 64bit Ubuntu 8.0.4:

    32bit IBM 1.5:

    java.vendor=IBM Corporation
    java.vendor.url=http://www.ibm.com/
    java.version=1.5.0
    java.vm.info=J2RE 1.5.0 IBM J9 2.3 Linux x86-32 j9vmxi3223-20061001 (JIT enabled)
    J9VM - 20060915_08260_lHdSMR
    JIT  - 20060908_1811_r8
    GC   - 20060906_AA
    java.vm.name=IBM J9 VM
    java.vm.specification.name=Java Virtual Machine Specification
    java.vm.specification.vendor=Sun Microsystems Inc.
    java.vm.specification.version=1.0
    java.vm.vendor=IBM Corporation
    java.vm.version=2.3
    os.arch=x86
    os.name=Linux
    os.version=2.6.24-23-generic
    sun.arch.data.model=32
    

    64bit Sun 1.6:

    java.vendor=Sun Microsystems Inc.
    java.vendor.url=http://java.sun.com/
    java.vendor.url.bug=http://java.sun.com/cgi-bin/bugreport.cgi
    java.version=1.6.0_05
    java.vm.info=mixed mode
    java.vm.name=Java HotSpot(TM) 64-Bit Server VM
    java.vm.specification.name=Java Virtual Machine Specification
    java.vm.specification.vendor=Sun Microsystems Inc.
    java.vm.specification.version=1.0
    java.vm.vendor=Sun Microsystems Inc.
    java.vm.version=10.0-b19
    os.arch=amd64
    os.name=Linux
    os.version=2.6.24-23-generic
    sun.arch.data.model=64
    

    64bit GNU 1.5:

    java.vendor=Free Software Foundation, Inc.
    java.vendor.url=http://gcc.gnu.org/java/
    java.version=1.5.0
    java.vm.info=GNU libgcj 4.2.4 (Ubuntu 4.2.4-1ubuntu3)
    java.vm.name=GNU libgcj
    java.vm.specification.name=Java(tm) Virtual Machine Specification
    java.vm.specification.vendor=Sun Microsystems Inc.
    java.vm.specification.version=1.0
    java.vm.vendor=Free Software Foundation, Inc.
    java.vm.version=4.2.4 (Ubuntu 4.2.4-1ubuntu3)
    os.arch=x86_64
    os.name=Linux
    os.version=2.6.24-23-generic
    

    (The GNU version does not report the "sun.arch.data.model" property; presumably other JVMs don't either.)
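    Combining the two properties gives a reasonable fallback chain; a sketch (the class name and the string-matching heuristic are mine, not a standard API):

```java
public class JvmArch {
    /** Best-effort guess at the running JVM's data model ("32" or "64"). */
    public static String dataModel() {
        // Sun/Oracle-specific, but the most direct answer where present
        String model = System.getProperty("sun.arch.data.model");
        if (model != null) {
            return model;
        }
        // Fall back to os.arch naming conventions (amd64, x86_64, ia64, ...)
        String arch = System.getProperty("os.arch", "");
        return arch.contains("64") ? "64" : "32";
    }

    public static void main(String[] args) {
        System.out.println(dataModel() + "-bit JVM");
    }
}
```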

  • There might be both 32-bit and 64-bit JVMs available on the system, and plenty of them.

    If you already have DLLs for each supported platform, consider making a small executable that links and runs, so you can test whether the platform supports a given functionality. If the executable links and runs, you can install the corresponding shared libraries.

  • The following code checks the machineType field in the java.exe (effectively the equivalent of using uname):

    import java.io.*;

    public class ExeDetect
    {
      public static void main(String[] args) throws Exception {
        File x64 = new File("C:/Program Files/Java/jre1.6.0_04/bin/java.exe");
        File x86 = new File("C:/Program Files (x86)/Java/jre1.6.0/bin/java.exe");
        System.out.println(is64Bit(x64));
        System.out.println(is64Bit(x86));
      }

      public static boolean is64Bit(File exe) throws IOException {
        InputStream is = new FileInputStream(exe);
        try {
          int magic = is.read() | is.read() << 8;
          if(magic != 0x5A4D)                      // "MZ" DOS header signature
              throw new IOException("Invalid Exe");
          for(int i = 0; i < 58; i++) is.read();   // skip to the PE offset at 0x3C
          int address = is.read() | is.read() << 8 |
               is.read() << 16 | is.read() << 24;
          for(int i = 0; i < address - 60; i++) is.read(); // skip until pe header+4
          int machineType = is.read() | is.read() << 8;
          return machineType == 0x8664;            // IMAGE_FILE_MACHINE_AMD64
        } finally {
          is.close();
        }
      }
    }
    

    Note that the code has been compacted for brevity...

    Gregory Pakosz : note that the code breaks as soon as the user doesn't use the very same jre version...
  • A reflective alternative: ask sun.misc.Unsafe for the native pointer size, which is 4 on a 32-bit JVM and 8 on a 64-bit one:

    import sun.misc.*;
    import java.lang.reflect.*;

    public class UnsafeTest {
      public static void main(String[] args) throws NoSuchFieldException, IllegalAccessException {
        Field unsafeField = Unsafe.class.getDeclaredField("theUnsafe");
        unsafeField.setAccessible(true);
        Unsafe unsafe = (Unsafe) unsafeField.get(null);
        System.out.println(unsafe.addressSize());
      }
    }
    

Extreme wait-time when taking a SQL Server database offline

I'm trying to perform some offline maintenance (dev database restore from live backup) on my dev database, but the 'Take Offline' command via SQL Server Management Studio is performing extremely slowly: over 30 minutes and counting. I am just about at my wits' end, and I can't find any references online as to what might be causing the speed problem or how to fix it.

Some sites have suggested that open connections to the database cause this slowdown, but the only application that uses this database is my dev machine's IIS instance, and the service is stopped - there are no more open connections.

What could be causing this slowdown, and what can I do to speed it up?

From stackoverflow
  • do you have open SQL Server management studio windows that are connected to this DB?

    put it in single user mode, and then try again.

    u07ch : ALTER DATABASE SET SINGLE_USER WITH Rollback Immediate
    Erik Forbes : KMike - the only connection I have is open to the Master database, not the database I'm trying to take offline.
  • There is most likely a connection to the DB from somewhere (a rare example: an asynchronous statistics update)

    To find connections, use sys.sysprocesses

    USE master
    SELECT * FROM sys.sysprocesses WHERE dbid = DB_ID('MyDB')
    

    To force disconnections, use ROLLBACK IMMEDIATE

    USE master
    ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    
  • After some additional searching (new search terms inspired by gbn's answer and u07ch's comment on KMike's answer) I found this, which completed successfully in 2 seconds:

    ALTER DATABASE <dbname> SET OFFLINE WITH ROLLBACK IMMEDIATE
    

    Thanks a bunch guys - I appreciate the help. =)

  • Anytime you run into this type of thing, you should think of your transaction log. The ALTER DATABASE statement with ROLLBACK IMMEDIATE indicates this to be the case. Check this out: http://msdn.microsoft.com/en-us/library/ms189085.aspx

    Bone up on checkpoints, etc. You need to decide if the transactions in your log are worth saving or not and then pick the mode to run your db in accordingly. There's really no reason for you to have to wait but also no reason for you to lose data either - you can have both.

    Erik Forbes : Sage advice - thank you - but in this case the data is expendable as it's a development database that is being restored to.

Backup-restore the current repeat action (.) in VIM

I'd like to save the current single-repeat action (this is what happens when you press .), then run a macro, and then restore the . action (because macros usually change it).

Is this possible?

Alternatively, is it possible to write a macro without changing the current . action? I guess I could try to do everything in Command Mode, but I'm not sure if that's possible.

Background: I'm writing a file browser function (like command-T in TextMate, see http://github.com/mdemare/fuzzy_file_finder), and I want to make as few changes as possible to the user's environment, so the changes have to occur in the function. Currently, when you switch files using this function, your . action is overwritten, which is very unwelcome.

From stackoverflow
  • You can record an action into a register to be used later. Press q followed by a register (a-z, A-Z, 0-9 or " are valid register identifiers), apply the desired command/actions and the press q to stop the recording. The command can be recalled by pressing @ followed by the register.

    For more detailed instructions, see the complex repeat section of the Vim documentation.

    NOTE: Unfortunately, the sequence qa.qu will not do exactly what you want since the . command will repeat the current last action and not the last action at the time the command was recorded.

    Michiel de Mare : I know that, that's not what I meant at all. That's what I call a macro, and it changes the . action.
    Judge Maygarden : Well, then I supposed you answer is no...
  • The only way I can think of to help you out: Remap '.' to save a history of actions, which you could then recall if needed. For ideas on these lines, see the repeat.vim plugin.

    Judge Maygarden : How does one retrieve the last action by remapping '.'? How is the last action stored/modified?
    Caleb Huitt - cjhuitt : @monjardin: I don't really know those answers, but I had come across the vim plugin that looked like it might be doing those things, so I mentioned it as a possible inspiration.
  • In VIM you can create a macro that will execute any edits you would typically do in normal mode without disturbing the redo [.] functionality by wrapping those edits in a user defined :function and then executing that function with a :mapped key.

    Example

    The best way to see it is with an example. Suppose you want to add the text yyy to the end of the current line every time you hit the F2 key, but you don't want this to interfere with the redo command [.].

    Here's how to do it:

    1. Open a new vim window and execute the following commands:

      :fu JL()
          normal Ayyy
          endfu
      :map <F2> :call JL()<Enter>
      
    2. Now add some text, let's say xxx, by typing Axxx<Esc>

    3. Now press the [F2] key and you should see xxxyyy

    4. Finally, press the [.] key and you should see xxxyyyxxx

    Just what you wanted!

    Why this works

    This works because of the nature of the way VIM executes the redo command. VIM keeps track of the characters of a command as you type it. When you press the [.] key, it stuffs those characters back into the keyboard buffer to re-execute them. Unfortunately, a simple q macro works the same way -- it stuffs characters into the buffer and by doing so overwrites the redo buffer. The :normal command does this as well, but when placed inside a user defined function we get around this limitation because the code that executes the user defined function saves and restores the redo buffer during the user defined function.

    This all happens in eval.c and getchar.c in the VIM source code. Search for saveRedobuff to see what's going on.

    Michiel de Mare : Excellent! Enjoy the karma!

XML schema for ASPX?

I'm editing a lot of .aspx files in Emacs these days. nxml-mode can use a schema (RELAX NG, but maybe others) to detect errors in XML, and I find it really handy for other things.

Is there a RELAX NG schema for *.aspx files?

(I'm having trouble searching because "aspx" is so common not just as a language but as part of URLs.)

From stackoverflow
  • I don't think ASPX is an XML format. I think <%@ is not valid XML, for example.

creating dynamic controls using ajax on asp.net

I have Add and Remove buttons on my page. The Add button adds one checkbox, two textboxes and one dropdownlist on a new line of my page; the Remove button removes them. I have this running nicely by following Joe Stagner's example.

Problem: The controls that are created dynamically all need to fire the same event when checked (for checkboxes), also for selected index changes (for dropdownlists).

I have tried to add an event handler when I create each control, but it doesn't seem to fire.

From stackoverflow
  • I think you're probably running into the fact that your page, upon each page post, is being completely recreated - essentially the page has to duplicate what controls were on your page before it can attempt to feed postback (and events) to them. I think what you probably need to do is add code to your page_load which will re-create the dynamically created controls, with the same ids as they had, and register the event handler.

  • Sounds like you have a page life cycle issue.

    For dynamically created controls to fire events, you should create them in the PreInit event of the page.

    Here's a link to a cheat sheet for Asp.net page life cycle.

  • You need to persist the dynamically created controls in some way [session, viewstate, etc.] for each page load. Recreate the dynamic controls and re-bind the events using delegates on each page load, in the PreInit event.

  • Yeah, as everyone said, it is a life cycle issue. When you load user controls dynamically you should always do the following.

    • Assign a unique ID for each User Control.
    • Reload the user controls on Page_Load or Page_Init Events.

    And to make it all easier, I suggest abstracting the loading into a function that you call from both Page_Load and Page_Init as mentioned before; this function will check whether the target user control was loaded and will load it again for you. To do that, you store the loaded user controls' IDs in Session or ViewState.

    hope this helps.

    Ivan : An ID is given to each control and all controls are reloaded in OnInit; it just doesn't fire
    : can you show us some code bro ?
    Ivan : there is a link to Joe Stagner's example code (both C# and VB) in my question. It is the same concept, so if you could show me, for example, what I would need to do to have the same event, let's say OnTextChanged of the textbox control, firing for any given dynamically created textbox

get all of the immediate subdirectories in ruby

I'm trying to write a simple ruby script that will copy index.tpl to index.html in all of the subdirectories (with a few exceptions), but I'm getting bogged down trying to get the list of subdirectories.

From stackoverflow
  • Dir.glob("**/") will return an array of all directory paths underneath the current directory. From there you can filter the list and copy a file with FileUtils.cp(from, to)

  • If you mean to find all the immediate subdirectories (just one level below where you are), try this:

    Dir.chdir("/some/path/you/want/to/check/below")
    subdir_list=Dir["*"].reject{|o| not File.directory?(o)}
    

    That is: change directory someplace, construct an array of files found there, reject those array elements that aren't directories, and return the resulting culled arrray.

  • Assuming you only wanted the immediate subdirectories, you could use Dir['*/'] (which combines Micheal Sepcot's and glenra's answers).

    Andrew Bullock : what's to "assume"? that's what he asked in the question! +1
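
Putting the answers together, here is a minimal sketch (the copy_template helper and the skip list are illustrative names, not part of the question):

```ruby
require 'fileutils'

# Copy a template file into every immediate subdirectory of root,
# skipping any directory names listed in skip. Returns the names copied.
def copy_template(root, template, target, skip = [])
  copied = []
  Dir.glob(File.join(root, '*/')).each do |dir|  # immediate subdirectories only
    name = File.basename(dir)
    next if skip.include?(name)
    FileUtils.cp(File.join(root, template), File.join(dir, target))
    copied << name
  end
  copied.sort
end
```

For example, copy_template('.', 'index.tpl', 'index.html', ['vendor']) would place an index.html next to each subdirectory's contents except under vendor/.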

Slow performance from MapServer

I'm using mapserver to create a map that will be displayed with the Google Maps API, and I'm encountering performance issues.

My maps are all in shapefile format.

I ran tests to measure map rendering time.

When rendering a map with the shp2img tool, using command line

shp2img -i gif -m C:\myfolder\mymapfile.map -o C:\myfolder\test.gif -all_debug 5 -map_debug 5

I get the following metrics from the log files:

[Thu Apr 30 13:50:19 2009].148000 msLoadMap(): 0.000s
[Thu Apr 30 13:50:19 2009].180000 msDrawMap(): Layer 0 (PWorld2), 0.032s
[Thu Apr 30 13:50:19 2009].180000 msDrawMap(): Drawing Label Cache, 0.000s
[Thu Apr 30 13:50:19 2009].180000 msDrawMap() total time: 0.032s
[Thu Apr 30 13:50:19 2009].195000 msSaveImage() total time: 0.015s
[Thu Apr 30 13:50:19 2009].195000 msFreeMap(): freeing map at 01595E18.
[Thu Apr 30 13:50:19 2009].195000 freeLayer(): freeing layer at 0159CD00.
[Thu Apr 30 13:50:19 2009].195000 shp2img total time: 0.047s

When rendering the same map through mapserver, using

http://localhost/cgi-bin/mapserv.exe?mymapfile.map&layers=&mode=tile&tilemode=gmap&tile=1+1+2

log file is giving this:

[Thu Apr 30 13:51:50 2009].664000 CGI Request 1 on process 3520
[Thu Apr 30 13:51:50 2009].664000 msTileSetExtent (-10013744.792915, 8348.961808) (-5009.377085, 10010405.208192)
[Thu Apr 30 13:51:51 2009].23000 msDrawMap(): Layer 0 (PWorld2), 0.359s
[Thu Apr 30 13:51:51 2009].23000 msDrawMap(): Drawing Label Cache, 0.000s
[Thu Apr 30 13:51:51 2009].23000 msDrawMap() total time: 0.359s
[Thu Apr 30 13:51:51 2009].23000 msSaveImage() total time: 0.000s
[Thu Apr 30 13:51:51 2009].23000 mapserv request processing time (loadmap not incl.): 0.359s
[Thu Apr 30 13:51:51 2009].23000 msFreeMap(): freeing map at 01598690.

For the same map, the shp2img tool renders roughly 10 times faster than mapserver. When adding more layers and using the tiling mode for Google Maps, rendering can take up to 10 seconds.

Does anybody know why mapserver renders this slowly? Is there a workaround?

From stackoverflow
  • I have a couple of suggestions but no hard answers, I haven't done much mapserver config but I've worked with people who have.

    1. There are a lot of optimizations you can do to mapserver, I'd check the mailing list.
    2. Make the mapfile as small as possible, as opening and parsing the mapfile can be time consuming for mapserver.
    3. Create all the tiles ahead of time and just use mapserver to serve the pre-rendered files. Tiling on the fly is not very fast.
    Blue : I have already made a lot of optimizations in the mapfile. It looks like the optimizations make rendering faster with shp2img, but yield little improvement in mapserver. I was expecting mapserver to be roughly as fast as the shp2img tool. I don't understand why shp2img is faster.
  • There are a couple of differences between the shp2img and the mapserv request:

    1) shp2img creates a single image; your mapserv request generates tiles. This means that it might have to render 9 tiles for the same bounding box instead of 1, which generates overhead. Try rendering without the tiles option set and run your test again.

    2) You have a small overhead for the cgi request in mapserv.

    3) Mapserv is pushing the image over http while shp2img is writing directly to disk.

    4) You did not specify the layer in the mapserv request, which means that mapserv goes looking for layers.

    Donny V. : 3) Make sure and use FastCGI, it makes a huge difference. http://www.slideshare.net/DonnyV/wms-performance-tests-map-server-vs-geo-server

documenting existing code

I just joined a Heroic shop. The code appears to be clean and quality, but there is no documentation to speak of. (the code was written and maintained by a small group). Thus, new engineers need to reengineer from source to determine design. As the project is transitioning from a small (3) team to a larger team, I would like to document my efforts to learn the applications so the next hire can learn them more quickly.

A quick search of "document existing" + application | code doesn't yield much. I am sure this is a common question, so a reference to another discussion might be best.

Any suggestions?

From stackoverflow
  • What language? Depending on the language, something like doxygen could be very helpful.

  • Another easy starting point is a wiki. Documenting or diagramming the high-level structure of the system/application so everyone has a decent starting point should be a priority, and a wiki is a convenient way to get the info out there. (I'd be surprised if there was no documentation at this level from the original 3, but I bet they could whomp up something useful very quickly...)

  • VSdocman (http://www.helixoft.com) is an excellent tool for generating documentation for C# and VB.NET. They also have a tool for VB6. You can download a 14-day trial version. Prices start at $229 for the .NET version and $79 for the VB6 version.

  • Consider doing some of your documentation in the form of automated tests. Then you'll (a) verify that the code really does work the way you understand it to, and (b) build up a suite of regression tests to let you know if you break something later (e.g. due to not realizing that changing X will cause a side effect in Y -- very possible even in code you are familiar with).

    If you don't know how to get started with adding tests to existing code, pick up Michael Feathers' excellent book "Working Effectively with Legacy Code".

    As for (easily) human-readable and -skimmable documentation, Fit and FitNesse can produce human-readable, but executable, tests, especially when the requirements can be represented in a grid (these inputs -> these outputs).

    cyborg_ar : +1 test cases often are more valuable than heavy manual pages
  • Backing up what Jeff K. said, I'd go with a wiki, with privileges spread as wide as possible (ideally to your users too). Once people know it is up, putting up a stub for each program will often be enough to catalyze several iterations of updates, culminating in rather complete documentation surprisingly quickly. Engineers may hate documenting things, but we can't resist fixing something we see wrong.

    Worst case, you end up with revision-controlled documents which nobody but you ever edits. That's no worse than what you have now.

  • Depending on how complex the code is, I've done a few things.

    You probably want to go over each method/function/object/whatever (you didn't mention a language) and try to understand what it is doing. If you have ANYTHING that takes you more than a minute to understand, figure out what you didn't understand and write a comment so the next time it won't take you that minute.

    Understanding how all the parts relate to each other can be tough unless the design is very well done. Printing debug output at the entry/exit of each routine and using stack dumps can be helpful to see how you got some place. A debugger can be awesome for figuring this stuff out.

    A final tool I found to be really useful is a profiler. I had a free profiler plug-in for Eclipse (I forget the name, but I don't think there are many) that would create an awesome sequence diagram for any code it went through as it was running. This may be the best tool I ever saw for understanding what the code was doing. It was a bit difficult to set up at the time, but keep at it, it's doable and well worth it.

    I turned on the profiler, hit one button/executed one task, then saved the "Run" for that button.

    I filtered out classes that were trivial and got it to a semi-reasonable size (Some were 2 pages, one sequence diagram took 4x6 sheets of paper to print (landscape)). I taped them all together and put them on my cube wall and studied/documented the hell out of that thing.

    Sequence diagrams rock when done right, by the way. If you are trying to understand some code and you don't use sequence diagrams, look into them. I think they are probably the most useful design documentation tool I've seen.

  • If there are few or insufficient comments in the code, Doxygen will have limited value. However, it can still be used to give you some idea of the code structure and dependencies. (Things like a profiler are great for understanding the behavior.) I find pictures helpful for understanding dependencies and usually run the code through a UML tool to reverse engineer the design. You can get similar, class diagrams by using Graphviz, which can be integrated with Doxygen pretty easily (even for non-object oriented code).
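
For instance, a Doxyfile fragment along these lines (these are standard Doxygen options; adjust to taste) turns on the Graphviz-backed dependency and call graphs, even for undocumented code:

```
# Document everything, even entities without doc comments
EXTRACT_ALL    = YES
RECURSIVE      = YES
# Use Graphviz (dot) to draw dependency and call graphs
HAVE_DOT       = YES
CALL_GRAPH     = YES
CALLER_GRAPH   = YES
```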

ReportBuilder 7.x - Controlling Print to File at Print Time

Using ReportBuilder 7.X

Question

Is it possible to control Print to File?

I need to change the Length of a field at print time

Example:

label2

In the setup, I set its length to 800, which is the max possible this field should ever be. However, in many cases the record is less than that, and I need to set it to the calculated size before printing to file.

Is this possible?

Is it possible to control any portion of this Print to File at print time (before print, after print)? Are the objects available?

We are registered users of 10.x and above, I believe, but have still not gotten around to recompiling our application in Delphi 2009 and the new ReportBuilder, so that is not an option at this point.

Thanks

Shane

From stackoverflow
  • You can try to use the OnDataChange event of the TDataSource that you are using to link your data to your report. This event fires when the current record in the associated dataset changes. In that event, adjust your label to the size for the current record.

  • I solved this! Each control has a SaveLength property. I can just use a global variable that can change at any time (controlling the length of the entire record). Then, just before I print the label, I can set its SaveLength property.

    thanks to all who responded though

How to determine if a field is set to not null?

I have an existing program deployed where some customer databases have a field set to not null, while others are nullable. I need to run a patch to correct the database so that the column is nullable but do not need to run it against all databases, just ones where it is incorrect. Is there a simple method that can be used in SQL Server to perform this check? Preferably something that can be run as part of a SQL script.

From stackoverflow
  • select Table_Name, Column_Name, Is_Nullable
    from information_schema.columns
    

    Will get you that info

    Joshua Belden : Can you comment on why information_schema.columns is better than syscolumns. I honestly don't know, not being antagonistic.
    cmsjr : Absolutely, I would say, where possible to use them, the schema views are preferable because 1. They are less susceptible to change than the system tables (e.g. sys.objects vs sysobjects) and 2. They tend to aggregate system table info helpfully (e.g. foreign key references)
    Joshua Belden : Perfect, thank you.
    Tom H. : Another important point is that they are in the SQL-92 ANSI standard. The sys.objects, etc. tables are Microsoft specific.
    cmsjr : I was unaware of that, thanks for the info.
  • select isnullable from syscolumns where name = 'status'
    
    Mitchel Sellers : This query needs to have extra where clause items as if there are multiple tables with status columns you will get multiple results.
  • Look into the INFORMATION_SCHEMA views. For example:

    SELECT
         IS_NULLABLE
    FROM
         My_DB.INFORMATION_SCHEMA.COLUMNS
    WHERE
         TABLE_SCHEMA = 'dbo' AND
         TABLE_NAME = 'My_Table' AND
         COLUMN_NAME = 'My_Column'
    

    IS_NULLABLE will be either "YES" or "NO".