Saturday, January 29, 2011

Infiniband QDR to DDR

I have a bunch of computers with onboard Mellanox ConnectX-2 QDR Infiniband 40Gbps controllers. They have QSFP connectors.

I have a switch with 24 4x CX4 DDR connectors.

If I buy QSFP to CX4 cables, will the QDR controllers on the computers be able to downscale to DDR and communicate with (through) the switch?

  • Infiniband should be smart enough to negotiate a compatible data rate; in your case, since your switch is a DDR switch, your IB network will run at DDR rates.

    Realn0whereman : Well, I have a lot banking on this, so I'd like more than a "should". But thank you for your answer. I'll try to give Mellanox a call, and leave this thread open in case anyone else can answer.
    Realn0whereman : Update: I called Mellanox and they verified that QDR can downscale to DDR. Success :D Now all I need are QSFP to CX4 cables.
    From ryanlim
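Once the cables are in place, the negotiated link rate can be checked on each host with ibstat from the infiniband-diags package (a sketch; output fields vary by driver version). A 4x DDR link reports a rate of 20:

```shell
# Show port state and negotiated rate for the ConnectX-2 adapter
# (requires the infiniband-diags package)
ibstat | grep -i rate
```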

Virtual Subdomains

I would like to manage subdomains exclusively through .htaccess.

I am able to catch subdomains that I set up - for example, support.testsite.com - currently, I redirect that to testsite.com/support.

What I would like to do is retain the subdomain, even after the redirect - so support.testsite.com seems to stay in the address bar for the user, even though it is actually located at testsite.com/support.

Should I maintain another .htaccess file in /support that rewrites the address?

Thanks!
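Assuming the subdomain's DNS and virtual host already resolve to the same document root, a single .htaccess at that root can rewrite internally based on the Host header; because there is no external redirect, the address bar keeps the subdomain. A sketch, using the hostnames from the question:

```apache
RewriteEngine On
# Serve support.testsite.com from /support without redirecting the browser
RewriteCond %{HTTP_HOST} ^support\.testsite\.com$ [NC]
RewriteCond %{REQUEST_URI} !^/support/
RewriteRule ^(.*)$ /support/$1 [L]
```

With this in place there is no need for a second .htaccess in /support; the one file at the root handles the mapping.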

Should we back up SQL 2000 with the ArcServe SQL agent or back up to file and then back up the file?

Hi all

We have SQL Server 2000 running on Windows Server 2003. We've recently moved to new hardware. We use ArcServe / Brightstor for backups. On the old server we used the SQL agent. We also have agents for file system backups.

We only have one server. SQL Server is set to simple recovery model. The old system was doing weekly full ("Database") backups and daily Differentials to tape via the SQL agent.

This might be a silly question but I'm wondering what we really gain by using the SQL agent other than added complication and memory use.

We have plenty of disk space. Is it a reasonable strategy to back up to disk using the standard SQL Server backup functionality and then simply back up those files using the file system agent? One advantage I see of doing that is that it would give us precise control of the time when the SQL backup occurs. With ArcServe doing it, it happens in the queue, which includes other servers on our LAN.

We soon plan to have a second server and implement log shipping.

Thanks

  • As far as I'm concerned it's a case of "six of one and a half dozen of the other". There's no right or wrong way so do it using whichever method you're more comfortable with in terms of timing, resource usage, reliability of the backup, etc. I often use a combination of the two methods depending on the server and backup strategies in use.

    squillman : +1 I'd also throw consideration for features that 3rd party backup software would provide that native backup doesn't such as compression, encryption, object-level backup, etc.
    joeqwerty : @squill: Good points.
    tetranz : Thanks. It sounds like it's worth using the SQL agent.
    joeqwerty : Glad to help...
    From joeqwerty
  • Given that you have a pretty simple setup, you're not really gaining much with 3rd party backups over the built-in osql backup. @squillman does make a good point about things like compression and encryption. We have used ArcServe and Veritas in the past but now just use osql backup and GnuPG in our backup script to accomplish the same thing and have had good luck with it. We do miss out on object-level restore this way but can live without it.

    It may be worth it to you to pay for the simplicity of having these and other features in the software, but I tend to look at from the opposite standpoint -- 3rd party software makes creating backups more intuitive, but can add complexity as well as another dependency when it comes to restoring/recovery.

    From nedm
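The backup-to-disk approach discussed above can be sketched as a scheduled script using the osql tool nedm mentions; the resulting .bak file is then swept up by the file-system agent on its normal pass (database name and path hypothetical):

```batch
REM Full backup to disk via Windows authentication (SQL Server 2000)
osql -E -Q "BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB.bak' WITH INIT"
```

Scheduling this with Windows Task Scheduler gives the precise control over backup timing that the question asks about.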

How to remove second ip address from vlan interface on switch

I am using SSH to remotely connect to a Dell PowerConnect switch (Dell uses the same commands as Cisco). I need to change the IP address of VLAN 1 from 192.x.x.x to 10.x.x.x. I am able to add the new address of 10.x.x.x with:

    console(config-if)# ip address 10.x.x.x 255.x.x.x

Now VLAN 1 will reply from both 10.x.x.x and 192.x.x.x.

How do I then remove the existing 192.x.x.x from that interface?

  • It's been a while since I've worked on Cisco gear, so this may be incorrect, but give this a try:

    > no ip address 192.x.x.x 255.255.255.0
    
    Vatine : That looks correct to me and it's only been two days since I last removed an IP address from a Cisco device.
    pizzim13 : So, it appears not all the commands are the same. 'no ip address 192.x.x.x 255.255.255.0' returns "Wrong number of parameters or invalid range, size or characters entered." And 'no ip address 192.x.x.x' removes both ips
    ErikA : Then try `no ip address 192.x.x.x`.
    From ErikA

SQL Server Management Studio licensing

We have a SQL Server 2008 server, licensed per processor.

If I want to install SQL Server Management Studio (SSMS) on a couple of desktop PCs (in order to connect to the server), do I need more licenses?

I can't find much about this in the SQL licensing FAQs.

  • No. You can install as many instances of SSMS as you want (it doesn't require a license)...provided you have the CALs to access SQL server.

    Having said that, I am not a Microsoft Licensing Scheme(tm) expert...

    Edited to add:

    Q. Do I need a separate license to run the SQL Server tools and analysis services?

    A. No, a separate license is not required. However, any device that has SQL Server tools or technologies installed must have a valid SQL Server license.

    From here

    codeulike : And with processor licensing CALs are not an issue, presumably?
    codeulike : If you can find a source for 'SSMS doesn't require a license' that would be handy, too.
    GregD : Edited my answer above.
    GregD : And to answer your question about the "processor" license. That particular license covers all devices that access THAT processor.
    codeulike : Thanks for the edit, but that bit of the Licensing FAQ is about as clear as mud to me. 'Any device that has SQL server tools ... must have a valid SQL Server license'. I think you are saying we should interpret that as "CAL is sufficient"?
    GregD : As long as you have the processor license, you're covered with any device that you install SSMS on, because the processor license is essentially an "unlimited" device CAL license for devices accessing that processor...
    codeulike : Right, I get it now! thanks a lot : )
    From GregD
  • There's no license requirement for the SQL Tools beyond having the needed CALs (or a CPU license for the servers) so that you can connect without issue.

    Now any computer which has the server components on it (SQL Engine, SSAS, SSRS, etc) will need to be fully licensed. So if you have one CPU license for one server running the database engine with 1 physical CPU and a second server with 1 physical CPU running SSAS then you need to buy another CPU license for the SSAS server.

    From mrdenny

Enable/Disable Microsoft CRM 3.0 workflow logging?

Is there a way to enable or disable workflow logging from Microsoft CRM 3.0? Our DB is getting pounded, and it looks like logging is the culprit. If not, is there a way to manage the volume of logging, like log4j levels?

  • I think this one should help you out: http://support.microsoft.com/kb/907490 . Make sure tracing is disabled, this should bring down the pounding.

    Robot : I saw that and wondered if it's related to what we're experiencing... that doc refers to trace files as opposed to WF log tables in SQL Server, but it's our DB that's under pressure.

Good Shared Hosting or VPS to Host DE Domain

One of my clients has a .de domain. I have a VPS from Linode and I am pretty happy with them. The domain is registered with GoDaddy. .DE domains have some restrictions on name servers. I first tried to move the name servers to Linode; I set the TTL values accordingly but couldn't succeed. Then I tried to configure GoDaddy's name servers, adding A and CNAME records to point to my Linode VPS, with no success. Now I am looking for a good VPS or shared host whose name servers work with .de domains. I found a shared host, but they don't have php-curl installed. If you have ever hosted a .de domain, please share your experience.

  • If you purchased the .DE domain from GoDaddy, you should be able to use their nameservers. I believe the restrictions you speak of are for actually registering the domains, not where the nameservers are located. What issues were you encountering with GoDaddy? Was it throwing an error when you tried to add the records? If so, you should probably speak to their support.

    Gok Demir : I tried to use GoDaddy but it fails: http://www.denic.de/en/background/nast.html
    vmfarms : That link you sent seems to be checking for correct DNS settings, only after you've configured them. Have you configured it on GoDaddy's end yet? Did it give you any issues or error messages within GoDaddy?
    From vmfarms

APC - SmartUPS RT 5000 Configuration

Hi,

I hope this is an easy one for someone!

We have a SmartUPS 5000 in our rack. At the moment, it looks like it's just been configured by our old IT support company to email them when it kicks in (not too useful).

I'd like to get it to shut down the servers we have here automatically on power interruption. I have a personal UPS at home, and I just install the software on the PC and after a few clicks, job done.

I understand this one works over the LAN. I can get access to the web console; however, I can't work out what I need to do on the servers to get them to shut down once the UPS sends the command. I can't see any CDs around, though I have registered on the APC site and downloaded some packages. They all seem to want to look for serial or USB devices, though.

Can anyone let me know what package I need to install to allow me to get this to work?

Many thanks!

  • The product you need is called Powerchute Network Shutdown http://www.apc.com/products/family/index.cfm?id=127

    X100 : I've just installed this thanks to your post. Looks like it should do the job. I take it this will act on the settings specified on the UPS device itself (under UPS > Shutdown). I'll schedule a planned test and see if the servers shut down.
  • I think you'll need some kind of master server that is issuing the shutdown command to the servers - I don't think you'll find this functionality on the UPS itself.

    What package to install to do this will depend on what environment you have. PowerChute Network Shutdown looks like a likely candidate, but if you have some kind of network management system in place (e.g. Nagios based) you might be able to use that to issue shutdown when it detects that the UPS has kicked in.

    X100 : If you see c10k's post, I've installed it and will do a test run. The UPS has shutdown parameters on it, how long to run on battery etc. so I'm thinking this might be what the software picks up. The installation asks for the management card IP, so it's talking to the UPS.
    dunxd : Yeah - c10k just pipped me to it. Go with that one.
    From dunxd

FTP -> VSFTPD -> allow uploading folders

I have a Fedora FTP server that uses VSFTPD. I wanted to know how I could allow the users to upload directories and make directories.

Thanks in advance.

  • # Uncomment this to allow local users to log in.
    local_enable=YES
    
    # Uncomment this to enable any form of FTP write command.
    write_enable=YES
    
    From Teddy
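The directives above cover local users. If the uploads come from anonymous users instead, vsftpd needs additional options before STOR and MKD (directory creation) are accepted; a sketch of the relevant settings:

```ini
# Allow anonymous users to upload files and create directories
anon_upload_enable=YES
anon_mkdir_write_enable=YES
```

Note that "uploading a folder" is a client-side feature: the FTP protocol only has MKD and STOR, so the client must walk the local tree and issue those commands itself (most graphical clients do).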

How to replace server name from received tag of email header

The headers of my outbound emails show the name of my server in the "Received:" tag and I want to replace this with my server's IP or domain name.

I found one link, but it is for MS Exchange: http://www.tek-tips.com/viewthread.cfm?qid=1608026&page=1

I want to do this for SMTP. I am using Windows Server 2008.

  • Hello,

    Did you resolve the problem, or not yet?

    Please help.

    Thanks, Salaam

    From

Telecom provider not supporting MTU higher than 1515 for leased line

We have ordered a 100 Mbps leased line and did not specify our requirement for a minimum supported MTU of 1530. The telco has now sent us feedback that they do not support 1530, only 1515.

Did anyone experience something similar? What could be the rationale behind a telecom provider not supporting a higher MTU? And what kind of hardware decides this in the case of a leased-line service?


Edit: line protocol is Ethernet; also I corrected: it is a 100 Mbps we ordered. Hope it clarifies. Thanks for your help.

  • In all likelihood, one or more segments of the leased line are tunnelled through some other technology (e.g. IP, ATM, Frame Relay). Adding up all the different networking layers tends to create this sort of problem. I've experienced this directly, and companies generally discover this "minor detail" from the telco only after signing multi-year contracts that they can't pull out of. Changing every single device in your network to support a smaller-than-standard MTU is a nightmare, and should be avoided if at all possible.
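The largest packet the line actually carries can be measured empirically with DF-flagged pings from a Linux host on one end (address hypothetical; on Windows the equivalent flags are -f -l):

```shell
# 1487-byte ICMP payload + 8-byte ICMP header + 20-byte IP header
# = a 1515-byte IP packet. Raise -s until "Frag needed" errors appear
# to find the real ceiling of the path.
ping -M do -c 3 -s 1487 192.0.2.1
```

This is worth doing before building anything on the telco's quoted figure, since (as noted above) the effective MTU often turns out lower than promised once every encapsulation layer is counted.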

Hidden nginx rewrite rule (I think).

About a week ago, I was playing with nginx rewrite stuff to rewrite /admin to https.

I now want to undo this, but I cannot for the life of me, remember where I put that rewrite rule.

I've reloaded, restarted, stopped and started nginx. I've rebooted the server. I've restored nginx.conf to the default version.

I have no idea where I put that rule. It's either there, or nginx is just confused, because when I go to [domain]/admin, it redirects to https://[domain]/admin

I might end up purging nginx from the system and installing from scratch.

Is there anywhere else that a rewrite might be put? Any suggestions?

Thanks.

  • Perhaps you could provide your actual configuration file? You want to look for the include directive as that's the only way any directive can be "hidden".

    Of course, far more likely is that your browser is caching and you've actually already removed the rewrite. Try to test the URL with curl and see if a location header is present, if not then it's browser caching.

    Dane Larsen : Ahh. It was the cache. Thanks.
    From Martin F
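Two quick checks that line up with this answer: search every file nginx could be including for rewrite-type directives, and probe the URL with curl so browser caching can't mislead you (paths and URL hypothetical):

```shell
# Find rewrites or redirects hiding in included config files
grep -rn -E 'rewrite|return|include' /etc/nginx/

# A Location: header here means nginx is still redirecting;
# no Location header means it's just your browser's cache
curl -sI http://localhost/admin | head -n 5
```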

Reverse engineer diskimage path from Volume name?

I have a sparse-image diskimage that is mounted on my system but I can't find the original file.

Is there a way to reverse engineer the location of the diskimage on file from the mount point (e.g /Volume/my-sparse-image) using command line tools?

I've tried diskutil and mount with no luck.

  • One option is to open Disk Utility, as it should show the disk image on the left-hand side with the volume it mounted below it.

    Alternatively if you run hdiutil info it will show the image path and mount point (along with other information) of all the disk images mounted on your computer.

    clscott : I really want to use the command line because I'm working on a utility script for my coworkers. `hdiutil info` without any other arguments is the answer I needed.
    From Chealion

IIS 7.0 Website Fails Regularly After About 30 Minutes

I have a website running under IIS 7.0 on Windows Server 2008. It's just being used by 2-3 people at any point in time under very light load.

It runs fine for about 30 minutes, but then fails with the error:

Server Error in '/' Application.

Dynamic view compilation failed. c:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files\root\61a09567\0ee17e160a294837a9b42f8e66a8d2c9-1.cs(6,7): error CS0246: The type or namespace name 'MvcReCaptcha' could not be found (are you missing a using directive or an assembly reference?)

MvcReCaptcha.dll is present in the bin directory, and is certainly used by the application while it's running (functionality provided by that DLL is referenced).

The application can be reliably restarted by:

  1. Stopping that site
  2. Deleting c:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files\root\61a09567
  3. Restarting that site

The Application Pool is set to recycle every 1740 minutes (no other conditions).

Thoughts on what might be causing the crash?

  • Place the DLL in the GAC; this should stop the issue from recurring.

    Eric J. : Thanks for the tip. Still, I would prefer not to have to GAC the DLL and would really like to understand the root cause.
    Woot4Moo : Perhaps it is that IIS is misinformed as to where this resides.
    Eric J. : @Woot4Moo: How could that happen? IIS can find the DLL initially, just seems to lose track of it at some point. What might I have done setting up the site that could cause that behavior?
    From Woot4Moo
  • It turns out this is a known issue with the Spark view engine.

    http://stackoverflow.com/questions/1805779/using-asp-net-mvc-2-features-with-the-spark-view-engine

    From Eric J.

How to set _optimizer_search_limit and _optimizer_max_permutations in Oracle10g.

I am working on a product that must support both MSSQL and Oracle (10g and 11g). I have some very complex queries that seem to run without issue on MSSQL 2005/2008, but very, very slow with Oracle. The CPU on the oracle server skyrockets for long periods of time, and it seems like the optimizer may be trying to find the best execution plan for the very complex query. I did some Googling to figure out how to limit the amount of time the optimizer spends on this, and came up with _optimizer_search_limit and _optimizer_max_permutations. Both of these parameters are hidden in Oracle 10g, and setting them in init.ora doesn't seem to make any difference.

How do I set these parameters in Oracle?

Or am I just totally barking up the wrong tree with the assumption that the optimizer is spending several minutes finding an execution plan?

Thanks.

  • Never heard of a query taking 15 minutes to optimize.

    First, I'd be checking the alert log. There may be some block corruption on a system table that is causing an issue.

    Second, do you have stats gathered for all the tables being queried? 10g introduced dynamic sampling, by which the database will go look at a table for some stats if there are none in the data dictionary. If you have lots of very large tables, then gathering stats dynamically could be slow.

    You can see if you've got (recent) stats with this

    select table_name, last_analyzed from user_tables
    

    If you are dealing with complex queries on large tables, then you should take some time to plan a stats gathering strategy.

    From Gary
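For completeness on the literal question: hidden (underscore-prefixed) parameters can be set by quoting the name, e.g. from SQL*Plus as SYSDBA. They are unsupported and should generally only be touched under guidance from Oracle Support; the value below is purely illustrative:

```sql
-- The quoted name is required because of the leading underscore
ALTER SYSTEM SET "_optimizer_max_permutations" = 2000 SCOPE = SPFILE;
-- SPFILE-scoped changes take effect only after an instance restart
```

That said, the answer above is the better path: multi-minute parse times usually point at missing statistics rather than at the permutation limit.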

CentOS Server won't reboot when issuing reboot command

CentOS 5.x

Hi all,

For some reason, my CentOS server didn't want to reboot after I issued the reboot and shutdown -r now commands. All I saw in /var/log/messages was:

Aug 25 13:34:32 voltage-out shutdown[1784]: shutting down for system reboot

Aug 25 13:34:33 voltage-out init: Switching to runlevel: 6

What would cause this? A hung process? How can I best troubleshoot this if it comes up in the future?

-M

  • You should perform a ps aux to see if any of the shutdown scripts are hung waiting for a process to finish. It should look something like this:

    /etc/rc6.d/K##procname
    

    You can try manually issuing a kill command for that hung script. Strange though, since there's a timeout set on the scripts where it will force a -KILL signal to any leftover process.

    Also, what's the uptime of the server/box? I've experienced an issue in the past where a box that has an uptime of over a year refuses to shut down. In that case, I've killed each process manually, run sync several times to flush all data to disk and forced a reboot (power cycle).

    Mikey B : @vmfarms thanks for the tip. That's exactly what I wanted to know. Uptime was over 300+ days
    From vmfarms
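As a last resort when init hangs in runlevel 6 like this, the kernel's magic SysRq interface can force an immediate reboot. Note that 'b' skips unmounting filesystems, so sync first (this assumes SysRq support is compiled in, as it is on stock CentOS kernels):

```shell
sync
echo 1 > /proc/sys/kernel/sysrq   # enable SysRq if it isn't already
echo b > /proc/sysrq-trigger      # immediate reboot, no clean shutdown
```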

making the share read only

Hello all, I'm sorry to come over again.

I was thinking of making the share read-only from Celerra Manager (CIFS only) by right-clicking on the file system:

"the NAS share (\nas12\termemployee$) to read-only to lock it from any further updates"

But I see somebody has already mapped this network drive and then, from the Security tab, has applied Domain Admins and given them Allow permission for "read and execute, list folder contents, read".

This has grayed out the Apply tab for more than an hour now. Is this the right way to do it? Please advise; I am worried this is going to cause a problem.

Let me know if any elaboration is required.

  • It sounds like you're not part of the Domain Admins group; and since they're the only group with an ACE, you will not be able to change the permissions. Either that or you haven't made any changes (the Apply button doesn't work if you haven't made any changes; depending on what version of Windows you're in).

    wildchild : It worked for me now. I have allowed "read and execute, list folder contents, read" only. Now my question is: when the backup team takes a backup, would this be enough to make sure the share is read-only and locked from any further updates? I don't have to make any changes from the Celerra side?
    Chris S : The owner of the object can always override the read-only ACEs; so probably not. Why isn't your backup team using the hardware snapshot provider?
    wildchild : The backup team has Legato NetWorker and I'm not sure if that feature is available there.
    From Chris S

IIS reveals internal IP address in content-location field - fix

Referring to http://support.microsoft.com/kb/q218180/, there is a known issue in IIS 4/5/6 whereby it will reveal the internal IP of a web server in the content-location field of the HTTP header.

We have IIS 6. I have tried the fix suggested, but it has not worked. The website is configured to send all requests to ASP.NET, and I am wondering if this is why the fix, which addresses IIS configuration, has not worked for us.

If this is the case, how would we fix this in ASP.NET?

We need to fix this issue in order to pass a security audit.

  • Where is this header coming from to begin with? According to that MSDN article (and my quick test), ASP.NET does not add a content-location header by default.

    I think you have something configured incorrectly.

    From

What to seek/avoid in a hosted server monitoring service?

I'm looking for a hosted monitoring solution (CPU, memory, disk, load, MySQL, replication, network, etc.) for a group of servers on Amazon EC2 / Scalr (app, MySQL, load balancer).

So far I found http://scoutapp.com , http://www.serverdensity.com, http://portal.monitis.com/

Do you know the pros and cons of these services? Do you have experience with them? Are there any other similar services I should look at?

Thanks!

  • How about Zabbix with http://www.mikoomi.com/? It's all open source.

    Niro : Thanks. I have Zabbix right now; I don't like the UI.
    From Rajat
  • How about Circonus:

    https://circonus.com/

    It's made by the guys over at OmniTI. I've been using it for months and it's an excellent hosted monitoring and trending solution. Prices are very reasonable too. I highly recommend checking it out.

    Niro : Just checked it out. It looks more like a pingdom competitor to monitor web pages load time than a serious server bottleneck level monitoring tool
    vmfarms : Not at all, it goes well beyond that. It can use a Nagios-like agent on your systems to report data back to their interface of any shape and size. It's quite a powerful system.
    From vmfarms

Setting up mySQL to do binary logging for incremental backups on Windows Web Server 2008 R2

I'm using MySQL 5.1 on Windows Web Server 2008 R2. I'm trying to set it up with binary logging to do incremental backups.

To start with binary logging enabled (I hope) I added

log-bin = "C:/binlogs"
binlog_do_db = mySQLdata

to my.ini. My database is "mySQLdata". Is that right?

When I add some lines of data to the database, I don't see any log files being written in the binlogs folder.

  • MySQL won't pick up my.ini changes without restarting it.

    Saul : This is getting worse. I tried restarting via MySQL Workbench and only got as far as stopping MySQL; it won't restart. Rebooted; still not running. Tried at the Windows command prompt: F:\Program Files\MySQL\MySQL Server 5.1\bin>mysqld - nothing happens. Tried under Control Panel > Services to start the service, but get "could not start the service, error 1067 the process terminated unexpectedly".
    Saul : If I comment out log-bin = "C:/binlogs" it will start. The directory exists; I've tried C:\binlogs, and with and without quotes.
    Craig : Look at the MySQL error log file for hints about what the problem is
    From Craig
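One plausible cause of the error-1067 startup failure described in the comments: log-bin takes a base name for the log files, not a bare directory, and the server must be able to create files at that location. Pointing it at a file prefix inside an existing directory usually works (paths illustrative):

```ini
# Base name, not a directory: MySQL appends .000001, .000002, ...
log-bin = "C:/binlogs/mysql-bin"
binlog_do_db = mySQLdata
```

After a restart, SHOW BINARY LOGS; (or SHOW MASTER STATUS;) should list files named mysql-bin.000001 and so on, and the MySQL error log will say why startup failed if it still does.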

SQL 2008 SA Password Gone with the Wind

Situation: There is a SQL 2008 instance here that we would like access to. The person who setup the instance is no longer with the company and, apparently, did not set the instance up with the proper users as admins. However, the proper users are admins on the machine that is running the SQL instance.

Some informative links I've been able to dig up on the subject are included here for reference. All have been tried and the results are mentioned below. ('h' omitted per serverfault's rule against new users posting more than one hyperlink) [fixed by edit]

http://social.msdn.microsoft.com/forums/en-US/sqlsecurity/thread/81970e88-104d-4e89-ade8-746def18108e/

http://msdn.microsoft.com/en-us/library/dd207004.aspx

http://blogs.msdn.com/b/raulga/archive/2007/07/12/disaster-recovery-what-to-do-when-the-sa-account-password-is-lost-in-sql-server-2005.aspx

When the disaster flag or single user flag is used to attempt to gain access the following error is still generated:

LOGIN FAILED FOR USER XXXX Error 18456

Any idea what the problem is with the solutions we're trying? If it matters, the machine is on a totally different domain (across the world, even), and attempts to log in as the service account that we set to run the SQL instance (we have that password, btw) fail as well.

Thanks in advance for your time.

  • Try enabling mixed mode by changing the authentication mode in the Windows registry (the LoginMode subkey), then restart the SQL Server service in single-user mode using the -m parameter. Then, at a command prompt, go to C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn, run SQLCMD -E, and execute this T-SQL to add the Windows administrators group as a login:

    use master;
    go
    CREATE LOGIN [BUILTIN\Administrators] FROM WINDOWS WITH DEFAULT_DATABASE=[master], DEFAULT_LANGUAGE=[us_english]
    go
    

    Now restart the service without the -m option and change the SA password.

    Good luck!

    EDIT:

    You might want to take a look at this article. It has screenshots and everything.

    SQL Problem : Thanks for this!
    From DaniSQL
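The registry change mentioned at the start of this answer corresponds to something like the following for a default SQL Server 2008 instance (the instance key name varies with your install; a LoginMode of 2 means mixed mode):

```batch
REM Switch the default SQL 2008 instance to mixed-mode authentication
reg add "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQLServer" /v LoginMode /t REG_DWORD /d 2 /f
```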

How to change Linux services startup/boot order?

Hi,

As the question is clear from the title, how do I change Linux services startup/boot order?

  • No offence, but the answer is the first hit on Google for "Linux Service Boot Order"

    wolfgangsz : Well, it is a rather trivial question. I am surprised the OP didn't make the effort to google it first him/herself.
  • You want to read a little about your runlevels and rc.d directories. Inside the rc.d directories you will find the S and K links, like S20apache and K10apache; those are basically what order the startup and shutdown of scripts.

    There are some changes being made to this architecture, but most Linux distributions are still using it.

    Chris S : I'm amazed most distros still use this system; better systems like `rcorder` have been around for a while.
    coredump : I kinda envy solaris `svc`, but could do without the xml stuff
    Redmumba : This is spot on. Depending on your distro, however, you may have different ways of altering this value--so read up on the specific documentation for your distro.
    Dennis Williamson : Some distributions, such as Ubuntu, use [Upstart](http://upstart.ubuntu.com/) ( [Wikipedia](http://en.wikipedia.org/wiki/Upstart) ).
    From coredump
  • You can change the order by renaming the symlinks under /etc/rcX.d/ where x will be your run level.

    You'll see a bunch of files starting with Sxx or Kxx. S links are traced during startup while the K ones are parsed for shutdown. The xx here represents the order.

    But this order is set for a reason, so be careful when changing it. For example, ntpd should start only after the networking subsystem is initialized.

    From rags
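The numeric ordering both answers describe is purely lexical: init globs the S* names, so the two-digit number after S controls start order. This can be demonstrated safely in a scratch directory (no real rc.d files are touched; names are made up):

```shell
# Create fake rc-style entries and show the order init would process
# them in (lexical, so S05 starts before S10 and S20)
dir=$(mktemp -d)
touch "$dir/S20apache" "$dir/S05syslog" "$dir/S10network"
ls "$dir" | sort    # S05syslog, S10network, S20apache, in that order
```

On Red Hat-style systems, chkconfig manages these links for you (the numbers come from the chkconfig header in each init script), which is safer than renaming symlinks by hand.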

How to load another PHP's phpmyadmin to current PHP

Hi folks,

In my application there are two PHP versions available:

1) PHP 5.1.6

2) PHP 5.2.6

Currently I am using PHP 5.2.6 in my application. I'm also using PHP 5.2.6's database (phpMyAdmin).

My question is: I want to use the other PHP's database (phpMyAdmin), i.e. PHP 5.1.6's phpMyAdmin.

I don't know how to do this. Please give me your suggestions: in which file should I make a change to access the correct database in my application?

Thanks

-Pravin

  • phpMyAdmin runs with PHP 5.1.6 as well as with PHP 5.2.6 and several other versions of PHP.

    : Ohhh, is that so? You're saying there's no need for this database redirection, and only one phpMyAdmin is needed for all PHP versions? Thanks
    joschi : phpMyAdmin is no database. It's merely a web-based frontend for MySQL written in PHP. So yes, you'll only need one installation of phpMyAdmin on your server even if you run more than one PHP version in parallel.
    From joschi
  • It sounds to me like you're asking if the version of phpadmin you use to manage the database affects the version you use to run your application. It does not. The version of PHP you use for your application and phpmyadmin do not have to match.

    From Redmumba

Session loss in dotProject

I'm setting up a dotProject installation, but my session is lost on every pageload and I get redirected to the login page. It seems the session variable "dotproject" is missing from every link. When I forge links manually (http://localhost/dotproject/index.php?m=ticketsmith&dotproject=....) the pages work fine.

Please advise.

EDIT:

I get the following warnings when enabling debug mode:

Warning: session_start(): Cannot send session cache limiter - headers already sent (output started at C:\Apache\htdocs\dotproject\index.php:24) in C:\Apache\htdocs\dotproject\includes\session.php on line 207
Warning: Cannot modify header information - headers already sent by (output started at C:\Apache\htdocs\dotproject\index.php:24) in C:\Apache\htdocs\dotproject\index.php on line 64
Warning: Cannot modify header information - headers already sent by (output started at C:\Apache\htdocs\dotproject\index.php:24) in C:\Apache\htdocs\dotproject\index.php on line 65
Warning: Cannot modify header information - headers already sent by (output started at C:\Apache\htdocs\dotproject\index.php:24) in C:\Apache\htdocs\dotproject\index.php on line 66
Warning: Cannot modify header information - headers already sent by (output started at C:\Apache\htdocs\dotproject\index.php:24) in C:\Apache\htdocs\dotproject\index.php on line 67
Warning: Cannot modify header information - headers already sent by (output started at C:\Apache\htdocs\dotproject\index.php:24) in C:\Apache\htdocs\dotproject\index.php on line 219

  • If you're running Firefox, you should run HttpFox, and inspect the cookies returned by the server. The problem could be related to an invalid hostname, causing the browser to reject the session cookie.

    : Thanks for the suggestion, but I don't get back any cookies at all...
    Lekensteyn : In index.php, could you uncomment line 24 (`error_reporting(E_ALL)`) and place any error messages here? I've a feeling that you get `Cannot send session cookie - headers already sent`. How did you upload dotProject's files, and edit these?
    : Thanks again for the help... I've updated my question based on your suggestion. Regarding your question: I don't exactly understand what you mean by uploading dotProjects files (I unzipped them on a server locally).
    Lekensteyn : Did you strip any comments? Because line 24 in index.php contains the line `error_reporting(E_ALL);` (latest version from Sourceforge). If you don't mind, could you run `phpinfo();`, and upload the result somewhere?
    : Here you go: http://akosch.rewq.org/info.htm
    : Nevermind: got it working by tweaking php.ini... Thank you for your time!
    Lekensteyn : Could you post your solution? Others might find it helpful too ;)
    : I needed to enable output_buffering in the php.ini file.
    From Lekensteyn
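For reference, the fix mentioned at the end of the thread corresponds to this php.ini setting, which buffers script output so headers can still be sent after the first echo (4096 is the value shipped in stock php.ini files):

```ini
output_buffering = 4096
```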

ERROR: OVM-2007 Oracle VM Agent is not active

I have a server on which I installed OVS (version 2.1).

I use https to connect to the server:

https://localhost:4443/OVS

but I can't create a server pool because this error appears:

OVM-2007 Oracle VM Agent (IP) is not active

All needed ports are published and the IP pings from the console. I have to run ovsremaster.py manually. What causes this error? Is the connection the problem? Is access to the server limited?

Thanks.

  • The ports that needed to be open are open, but connections can't use them.

    I tested using telnet:

    telnet IP PORT

    and the connection failed.

    Some ports were opened in iptables with the same command, but only port 22 responds; the others still seem closed. What can I do now?

    From abbas88

Disallow running a program as the wrong user

Using Windows 2008 Server, how can I allow a particular application to be run by one specific user, and prevent it from being run by any other user?

The scope is the local machine (the server) - I'm not concerned about a Windows domain. There is one, but I am only trying to apply restrictions for users logged into the physical server, or logged in via remote desktop.

  • Set the file permissions of the main program file to deny for everyone, and to full control only for the user you want to allow.

    Joel : Thanks. I realised this quickly after posting the question, and answered my own question. I'll accept this one.
  • This was simpler than I thought it would be.

    It was just a matter of right-clicking the executable: Properties, Security tab. I removed execute permissions for all users and added execute permission for the specific user.

    That did the job. Didn't need to mess with group policy after all.

    From Joel
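The same ACL change can be scripted with icacls, which ships with Server 2008 (the path and account below are hypothetical):

```batch
REM Strip inherited ACEs, then grant read+execute only to the allowed user
icacls "C:\Apps\restricted.exe" /inheritance:r /grant "MYSERVER\alloweduser:(RX)"
```

Scripting it is handy if the same restriction has to be applied to several executables or reapplied after updates replace the file.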

Linux network configuration wizard

I am building a Linux virtual appliance and I want to run a wizard to configure the network interface on first startup - a wizard like the one that runs during the Linux installation.

Does anyone have any suggestions?

  • CentOS (and by extension Red Hat) systems, have a great tool for this:

    /usr/sbin/sys-unconfig

    It basically forces you to reconfigure timezone/network/name etc on startup. Designed for when you move a server.

    Yeah.. found that out by accident.. lol

    DESCRIPTION: sys-unconfig provides a simple method of reconfiguring a system in a new environment. Upon executing, sys-unconfig will halt your system and run the following configuration programs at boot: passwd (to change the root password), netconfig, timeconfig, kbdconfig, authconfig, and ntsysv.

    FILES: /.unconfigured - the presence of this file will cause /etc/rc.d/rc.sysinit to run the programs mentioned above.

    Prix : he is looking for ubuntu tools, take a look at the question and comments.
    Grizly : true, forgot to look at the tag.. my bad.
    From Grizly
  • It looks like there's no such wizard in Ubuntu. You can use any of the methods provided in this wiki: http://ubuntuguide.org/wiki/Ubuntu:Lucid#Networking

    From DukeLion