Friday, January 28, 2011

ethtool, WOL: What does "wake on physical activity" actually mean and (how) can I use it?

Hi everyone.

I'm fighting with the WOL settings of my Ubuntu box at the moment. The idea is to have an HTTP/SVN server sleep while it's unused and wake up when it's accessed. So far, wake-on-LAN works and is activated on startup:

Settings for eth1:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: pg
        Wake-on: pg
        Current message level: 0x0000003f (63)
        Link detected: yes

As you can see, I also set the wol p flag ('wake on physical activity'). My assumption was that I could convince the device to wake up not only on magic packets, but on any network access. This, however, seems to be wrong.
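
For reference, this is roughly how those flags get set (a sketch; eth1 is taken from the output above, and whether each flag is actually honoured depends on the NIC driver):

# Sketch: enable wake on magic packet (g) and "physical activity" (p) on eth1.
sudo ethtool -s eth1 wol pg

# Check that the setting took:
sudo ethtool eth1 | grep Wake-on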

What does this flag mean then, and: (How) can I misuse this for my evil plans?

-- Markus (cross-post)

  • WOL typically requires a "magic packet" to actually wake a WOL-capable system when it is sleeping or in an "off" state. The magic packet is a specific message, not just any casual network activity (a sketch of sending one is below).

    Some NICs have advanced power management features that can wake a system based on network activity, but as with WOL, not every model has this capability.

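    As a rough illustration, sending a magic packet from another Linux box looks something like this (a sketch; the wakeonlan package and the MAC address are placeholders):

    # Send a WOL magic packet to the sleeping server's NIC.
    # Requires the 'wakeonlan' package (or use 'etherwake'); replace the MAC address.
    wakeonlan 00:11:22:33:44:55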

Using the free space on an ESXi datastore with an existing Centos VM of limited size

I have ESXi 4 on a home-built server with a 925GB RAID5 array (Adaptec 2405 card) and a single CentOS 5.4 VM running on it. The VM has a provisioned size of 20GB, which I can't seem to increase using the vSphere client. I would now like to either increase the size of the CentOS VM to use more of the free space, or somehow use the datastore as another volume that can be easily accessed from the CentOS VM. Does anyone know how to achieve this? Thanks.

  • What have you tried from the vSphere client?

    Is the virtual machine powered off? Are there any snapshots on the virtual machine?

    The machine needs to be powered off and without snapshots before resizing the drive. After you resize the virtual disk you can use GParted and/or LVM tools to resize the partition (a sketch of the LVM route is below).

    From andyh_ky
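    For what it's worth, the in-guest steps after growing the virtual disk look roughly like this, assuming the default CentOS 5 LVM layout (device, VG and LV names are examples only; back up first):

    # Sketch only - assumes the grown disk is /dev/sda and the root LV is
    # /dev/VolGroup00/LogVol00 (CentOS 5 defaults). Back up before resizing.
    fdisk /dev/sda                                    # create a new partition (e.g. /dev/sda3) in the free space
    partprobe /dev/sda                                # re-read the partition table (or reboot)
    pvcreate /dev/sda3                                # make the new partition an LVM physical volume
    vgextend VolGroup00 /dev/sda3                     # add it to the existing volume group
    lvextend -l +100%FREE /dev/VolGroup00/LogVol00    # grow the root logical volume
    resize2fs /dev/VolGroup00/LogVol00                # grow the ext3 filesystem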

Need help moving DNS providers (ZoneEdit to DynDNS)

I have a domain name and I use GoDaddy as my registrar. My DNS provider is ZoneEdit. I'm staying with GoDaddy as my registrar, for now, but I want to switch DNS providers to DynDNS. I already purchased an account with DynDNS.

In the process of switching, I want to avoid any downtime, especially when it comes to email (MX records). I'm a bit nervous and I want to double check with someone who has done something like this before that I'm doing things right. I'm especially interested in someone familiar with DynDNS and Google Apps which is where I host my e-mail. I'm somewhat confused by the priority settings for MX records. Also, do you suggest simply going to GoDaddy and changing my settings to point to DynDNS or should I do something else to avoid downtime?

Could you check that what I have set up with ZoneEdit is correct? (screenshot)

Here is what I have entered into DynDNS: (screenshot)

Thanks!

    1. Why is the TTL for your A records only 60 seconds? Make them 3600 seconds like the rest of your DNS records. There's no valid reason to have the TTL so short.

    2. Is your domain name a secret? Why black it out? Do you think it's some kind of security risk to let us see what your domain name is?

    3. The records you have setup at DynDNS look fine to me (except for the aforementioned TTL of the A records), although I would suggest setting the MX priority to be the same as it's currently set up.

    andresmh : Not sure why only 60 seconds. Is it just adding extra pressure to the DNS server?
    From joeqwerty
  • DNS MX record priorities are really just what they sound like. You give each record a priority (the lower the number the higher the priority) and in theory mail servers try the highest priority record first, and if it doesn't respond they try the next one, etc.

    I'm a little unsure why you've got records listed from 10 to 70; have a look at this for the relative priorities between the servers: http://www.google.com/support/a/bin/answer.py?answer=174125

    If ZoneEdit and Dyn will both be hosting your DNS records (at least for the immediate future) you should be fine: just change the name server delegation at GoDaddy, and after a while (depending on how often the root reloads) you'll see queries start using the DynDNS servers (a quick check is sketched below).

    It's one of those things that seems a lot scarier than it actually is so long as you do it right.

    andresmh : Thanks for the link to Google. The priorities I had were values that I had gotten from some other Google FAQ; weird that they suggest such a diversity of values.
    From Hutch
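    For reference, a quick way to verify the delegation and MX records once the change has propagated (a sketch; example.com stands in for the real domain and the DynDNS server name is only an example):

    dig NS example.com +short                      # should list the DynDNS name servers
    dig MX example.com +short                      # should list the Google Apps MX hosts and their priorities
    dig MX example.com @ns1.mydyndns.org +short    # ask one of the new servers directly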

kernel software trap handling

I'm reading a book on Windows Internals and there's something I don't understand:

"The kernel handles software interrupts either as part of hardware interrupt handling or synchronously when a thread invokes kernel functions related to the software interrupt."

So does this mean that software interrupts or exceptions will only be handled under these conditions:

a. when the kernel is executing a function from said thread related to the software exception (trap)
b. when it is already handling a hardware trap

Is my understanding of this correct?

The next bit:

"In most cases, the kernel installs front-end trap handling functions that perform general trap handling tasks before and after transferring control to other functions that field the trap."

I don't quite understand what it means by 'front-end trap handling functions' and 'field the trap'?

Can anyone help me?

  • This smells like homework, but I'll take a stab at it.

    What it sounds like to me is that it is saying software interrupts (which are not the same as exceptions, perhaps!) are handled at non-deterministic times. Basically, the OS wants to be efficient, and may handle them as part of another interrupt (a hardware one, it seems) or when you enter kernel space (say, when you make a kernel request).

    As for the front-end trap handling, that's more or less saying that the kernel gets a shot before and after each trap that is handled. For instance, some are not able to be passed on to user code no matter how hard you try. The kernel just handles it and never lets user code touch it. Others might be as simple as setting up a different stack to handle the interrupt, then letting user code take a stab at it. If none of the user-level code handles it, then it will eventually take some default action.

    Tony : This is not homework. Doing a self study.
    Michael Graff : No, I mean it sounds like homework I've had in my past :)
  • To me, it seems to be saying that software interrupts are handled either A) in the same part of kernel code as hardware interrupts, or B) not at the moment the software interrupt occurs; instead the kernel remembers that the interrupt occurred, and when a function related to the software interrupt is called, it handles the interrupt then.

    Windows has something called "Deferred Procedure Call" (DPC) where the bulk of interrupt processing is deferred until a convenient time. It does this because x86 CPUs only have one IRQ line, which is multiplexed by an external PIC or APIC. When an IRQ is triggered, the CPU automatically disables IRQs until the interrupt service routine reenables them. But since there is only one IRQ line, when IRQs are disabled, all IRQs are disabled. The x86 architecture has a lot of devices using IRQs, so the system (or at least that particular CPU) is essentially held hostage during the time IRQs are disabled. Thus, the DPC mechanism exists to ensure that IRQs are turned off for the least time necessary. The ideal is for the ISR to do the absolute minimum processing necessary before reenabling IRQs and to shift the rest of the work to a DPC.

    I could be wrong, but I think software interrupts disable IRQs automatically as well. So even though a software interrupt doesn't have I/O to service, it still prevents the system/that CPU from being able to service other interrupts until the interrupt handler reenables them.

    System calls using the assembly-language INT instruction are software interrupts (unless Windows uses a different method now, like Linux with its linux-gate.so trick), as are CPU exceptions, including page faults and divide by zero.

    So all interrupts are handled asynchronously in Windows, and in any operating system really, I think, for the above reasons. I'm not a kernel expert or anything, so just take the above as some insight.

OpenVZ IPv6 question

Hi

Is it possible to assign a /64 range of ipv6 ips to an openvz container?

  • That's certainly possible - why should it not?

    The next question might be whether there would be autoconfiguration for the virtual machines. Leaving aside the fact that you shouldn't be using autoconfiguration for server machines: whether it would work depends on whether you use venet or veth. With veth, each VM has an ethernet address, and you can run radvd in the container. With venet, stateless autoconfiguration will not work, but explicitly assigned addresses will.

    My recommended configuration is to assign :: to each VM, assuming they are all dual-stack.
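    For the venet case, explicitly assigned addresses out of the /64 would be added roughly like this (a sketch; the container ID and addresses are placeholders):

    # Add IPv6 addresses to a venet-based container (CTID 101 is a placeholder).
    vzctl set 101 --ipadd 2001:db8:1::10 --save
    vzctl set 101 --ipadd 2001:db8:1::11 --save

    # Check from inside the container:
    vzctl exec 101 ip -6 addr show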

IPv6 in a Windows XP/Windows 2003 Network

Has anyone successfully deployed an IPv6 network in an environment with Windows XP SP3 workstations and Windows 2003 R2 servers? Are there any operating system issues one needs to consider? Does the windows firewall still work under Windows XP?

  • Despite Juliano's observation, I answer the first question with "yes": I have done so. In XP and W2k3R2, there are certainly still IPv6 issues. For example, the DNS server support is limited (it supports the records, but not in the "nice" way in which W2k8 does it). The biggest problem is that RDP still doesn't use IPv6, so Linux RDP clients often fail to connect (especially over SSH tunnels). The firewall works fine, AFAIK.

IPv6 testing for embedded devices

I would like to test the IPv6 stack on an embedded system. How can I do that? Would establishing a test link between this device and another PC be a "good enough" test?

I was thinking about using a tool like socat to establish the link.

Should I ask this here or on Stack Overflow?

  • Depends on the reason for performing the test. If you want to know whether the device can provide a certain function, you should test whether that function actually works. If the function is "can establish IPv6 TCP connections", then this is what you should test - connecting to a PC would then be good enough.

    If you want to certify IPv6 support for the device, this test certainly wouldn't be sufficient. There are feature lists for IPv6 tests that are used in certification, e.g. the IPv6 Ready Logo.
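    To make the "connect to a PC" case concrete, a minimal sketch along the lines of the socat idea in the question (addresses, interface and port are placeholders, and this only exercises basic ICMPv6/TCP, not full conformance):

    # On the PC: listen on an IPv6 TCP port and echo back whatever arrives.
    socat TCP6-LISTEN:8080,reuseaddr,fork EXEC:'cat'

    # From the embedded device:
    ping6 -c 3 fe80::1%eth0                          # basic ICMPv6 reachability (link-local)
    echo hello | socat - TCP6:[2001:db8::1]:8080     # basic TCP connection over IPv6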

Same email account with different domains in Exchange 2010

I have one Exchange 2010 server installed.

I want to add about 4 to 5 domains to this Exchange server as trusted domains.

But my real issue is:

I can't add the same email ID for different accounts.

For example:

           info@domain.com
           info@domain2.com
           info@domain3.com

How can I do this?

Thanks

  • If the delivery for all the email addresses will be to the same "info" account, then the address spaces can be added to that account's addresses.

  • If these are all going to be processed by the same person, there's no need to create a new user for each of the addresses. In the Exchange Management Console, open up the properties of your info user and go to the Email Addresses tab. From there, add an SMTP address and add info@domain2.com and info@domain3.com as aliases.

    If for example info@domain.com and info@domain2.com are managed by different people, you can either add these aliases to individual users (as described above) or you can create a separate user or distribution list to handle these addresses (e.g. a user called Info at domain2.com).

    air : Actually these emails are for different persons. What I do is create a user in Active Directory and then create an email account in Exchange, but in Active Directory I can't create two users with the same name....
    Ben : Like I said, create a user called `Info at domain2.com` and add the appropriate alias.
    air : Sorry, I didn't understand.....
    From Ben

Easiest way to tunnel Windows HTTP via an Ubuntu server

Hi. I am trying to set up the simplest system to be able to proxy from my Firefox via one of our Ubuntu servers.

Initially, HTTP/S ports would be enough and it would only happen from 2 specific IPs (office and home). The server already has a complex iptables firewall configuration, so I really don't want to go down the Squid or Shorewall routes that I've seen published here. I do not need that many features (ACLs, cache, etc.), just sufficient iptables rules (or alternative software) so I can set up a proxy in my Firefox and connect via that server. I know an SSH tunnel can be done, but I have no idea how to make Firefox speak with my local SSH and use it as a proxy.

Any help or links would be appreciated.

  • EDIT: For windows, you can try

    http://blogs.techrepublic.com.com/security/?p=421

    Below instructions for Linux :)

    Set up a dynamic proxy using ssh:

    ssh -D 8080 yourserver
    

    Update the proxy settings in firefox. Look under Preferences, Advanced icon, Network tab, then the Settings button under connection. Change your proxy connection to manual, then put 127.0.0.1 as your SOCKS host and the port as whatever you used in the ssh -D command.

    You can script it all up by creating a second Firefox profile, let's say it's called "proxy". Then set up a script to handle it all:

    #!/bin/bash
    ssh -N -D 8080 yourserver &
    firefox -no-remote -P proxy
    kill %1
    

    I'll leave it up to you to decide if this is all within the bounds of your local security policy.

    Steven Monai : +1, but I have 2 notes: (1) Although the particular port number used doesn't really matter, the canonical SOCKS proxy port is 1080, not 8080. (2) Go into Firefox's `about:config` and set `network.proxy.socks_remote_dns` to `true`. This will force DNS queries to resolve from the remote side of the proxy (so that DNS query traffic appears to come from the same place as the browser requests).
    From Cakemox

IP/host-based traffic grapher

Do you know/recommend any tool for graphing traffic based on IP or host? In this case, I will get a graph for every IP that passes through my gateway. Of course, I will use a predefined list of source IP addresses. This is useful to monitor the traffic usage of all hosts inside your network.

I am using NagiosGrapher to graph the network traffic, but this tool graphs the total traffic passing through the system interface(s). Nagios uses a script to monitor the traffic of the interface; after that, the traffic data is passed to NagiosGrapher.

I need to install such a tool on my Linux server/gateway.

  • You might take a look at ntop.

    zerolagtime : Why is "look at ntop" or "look at atop" the common answer here? What the user is trying to do is **visualize** their traffic, not look at a boring chart or linear graph. I saw some responses to similar traffic lately that says that `iptables` is able to do some accounting. Then, someone is going to have to poll those statistics and visualize them. Standard model-view-controller problem. Model the data using iptables, view as a graph, controller directs selected data to the graph.
    : ntop can visualize traffic based on IPs as described. zerolagtime, do you know of something that works better that you can contribute?
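    As for the iptables accounting mentioned in the comment, the raw per-host counters can be collected with rules like these (a sketch; the addresses are placeholders, and ntop, NagiosGrapher or a small script is still needed to turn the counters into graphs):

    # One pair of accounting rules per internal host; the byte counters are the "model".
    iptables -N ACCOUNTING
    iptables -I FORWARD -j ACCOUNTING
    for ip in 192.168.1.10 192.168.1.11 192.168.1.12; do
        iptables -A ACCOUNTING -s "$ip" -j RETURN    # traffic from this host
        iptables -A ACCOUNTING -d "$ip" -j RETURN    # traffic to this host
    done

    # Poll the per-rule byte counters (e.g. from cron) and feed them to the grapher:
    iptables -L ACCOUNTING -n -v -x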

Low cost WSUS Alternative?

We currently use WSUS and it's fantastic for our workstations and keeping track of what updates our servers need.

I'd sooner not have WSUS automatically install patches to our servers in the small hours, but I would like to be able to click an "update these servers now" button, without the time involved in logging onto each server, firing up IE, going to Windows Update, etc.

Are there any suggestions on low cost ways to achieve this please?

I'm aware of Shavlik but across a couple of dozen servers it's not the cheapest option.

If it's relevant the servers are almost all VM's on vSphere.

Thanks a lot.

  • You can configure your group policy so that updates you have approved are downloaded to your servers, and you simply need to log in and install the waiting updates.

    You still have to log in to each server, but you don't have to do the Windows Update palaver, and you retain your centralised control of which updates to install, and reporting of which updates are pending.

    Hutch : Thanks, still a bit manual though. I'm looking at VMware Update Manager as that seems to have a right-click "do it now on these VM's" option from what I can see.
    alharaka : @hmallet I say forget logging in directly, and just run a psexec or WinRM process with a scheduled task that applies against a pre-determined list of computers. However, I should practice what I preach.
    From hmallett
  • Well, I can imagine one way, but the ease of doing it depends on which version of Windows Server you are running, or more specifically, whether you will be doing this with or without PowerShell.

    If you understand WSUS, and I hope others understand it better than me, you know all it is doing is acting as a proxy and, if configured to load updates from your WSUS server, a cache as well. It then periodically communicates with the clients to check which updates installed and which failed, recording that info in a central database that makes pretty reports. If you break it down into these components, you can see there is hope in making a free alternative for yourself, but you will need to put all the right pieces into place, so long as the caching portion is not a necessity for you and you will let clients talk upstream to Microsoft directly.

    • Set all clients to automatically receive updates with Group Policy to your liking; I assume you have it if you were using WSUS (since they are both "big boy" tools, to use my own term).
    • Use VBScript or PowerShell (obviously the latter is easier, hence my original comment) to directly call the update API, or use either to wrap around a freeware utility like WUInstall to do it for you (I would check the licensing for that option though, since there is a Pro pay-for version as well). As for the first two, people have asked before on SF.com; look around. Someone asked about this before, and I doubt it is the first or last time.
    • A server to host the database with the status of all your clients. It can be anything really, but MySQL or PostgreSQL would be cheapest; you could just use ODBC with VBS or PowerShell, depending on how much headache you want. If you have a SQL Server instance running, I assume you could talk to that. All you need is something simple to do CRUD operations. A co-worker has done something similar, albeit simpler, to record logons for different subsets of computers across our site and update a MySQL database he queries using phpMyAdmin. This presumes you do not need a pretty interface, and would need reporting for you and your team, not some manager type.
    • Script the clients to communicate with the server and update the database with their updates installed and errors, etc.. Again, you could use a couple open-source tools and change the database organization to get the same impact, I imagine.

    Now, I am sure there will be limitations.

    • It sounds like you can script the Windows Update API to force only Critical updates. However, the fine-grained control you are looking for would probably be lost unless you invest more time researching the Update API (since that is what WSUS relies on anyway) or look for better freeware tools, or even write your own. If you can figure it out, you can probably blacklist installation of certain KB ID's you want to be skipped. Again, that is how everyone else must do it.
    • Sexy-looking reports, as I alluded to before.
    • Many others I forget.

    Some benefits:

    • Better performance (I personally think despite its charms, WSUS requires a lot of overhead for a very simple operation).
    • Better pushing of client error logging (I come from a WSUS 3.0 shop, and I find it annoying that I often get errors from an update that just tell me to check the Event Viewer on the client). What are we paying for then? I feel like a custom-built solution gives you the power to do better in this department.
    • More flexibility in how to process the same computer with a different image or vice versa. We work in an environment with a lot of computers, and re-image often; some idiot techs use badly constructed images. As a result, SIDs can be a problem, despite Microsoft assuring us otherwise in all other crap they make except WSUS, according to Mark Russinovich. Now, it is my impression from limited WSUS experience you will have a lot of unique records for the same computer after re-imaging (instead of doing something cool and detecting the same hostname with a different SID or something) or you would have a big mess when techs do not sysprep images properly (we had that happen recently in a big department with their own tech).
    • A ton of others I forget.

    So, in short, you can see I too have thought of this. With a little bit of know-how, you might be able to do something cool on your own. I realize this is a tall order, but I think this would be the cheapest route imaginable.

    alharaka : Also, you could simply push the scripts with psexec or WinRM on the client servers, if these are very new Server 2008 installs. I just wanted to go into detail to illustrate the real sugar of WSUS, which is generating reports for picture-happy manager types. Hope this is informative.
    Kara Marfia : Thanks! Well thought out, and a great list of reference tools/links.
    From alharaka

Why did my cron job run twice when the clocks went back?

The clocks went back an hour last night at 2am - British Summer Time ended. My backup job is scheduled to run daily at 01:12. It ran twice. This is on a Debian Lenny server.

man cron says:

if the time has moved backwards by less than 3 hours, those jobs that fall into the repeated time will not be re-run

The crontab entry is: 12 1 * * * /home/lawnjam/bin/backup.sh

What's going on?

  • Ah, turns out it's a Debian bug. http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=217836

    Fixed in cron 3.0pl1-109, but Lenny is still on 3.0pl1-105.

    hmallett : First reported in 2003! That's a speedy fix...
    Steven Monai : That's why I never schedule any cron jobs to run between 1am and 3am. Glad to see this bug will be gone in Debian Squeeze.
    From lawnjam
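    For anyone checking their own box, the installed cron version can be confirmed with something like this (a sketch, Debian/Ubuntu):

    dpkg-query -W -f='${Version}\n' cron    # anything older than 3.0pl1-109 still has the bug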

How can I share a Windows folder to Linux with Samba? (step by step)

I have a folder "rep_backup" and I want to share it to a Linux server. How can I do it, step by step? I have installed Samba on my Linux server.

  • Have a look at this link.

    From Khaled
  • Exactly how to install Samba will depend on which distribution of Linux and related tools you are running, so you need to add that information to your question.

    You are not actually looking to install the Samba services on Linux if I am reading your question correctly (you want to access Windows shares from Linux, not access shares on Linux from Windows) - in which case what you want is support for the cifs filesystem driver. On Debian or Ubuntu, and setups based upon them, this can be installed with aptitude install smbfs. You don't need the full Samba service installed to access Windows shares - you only need that if you want the Linux server to publish shares itself.

    Once the cifs filesystem support is enabled you can mount shares using a command line like mount -tcifs //<machine-name-or-address>/<share-name> <mount-point> -ousername=<windows-user-name>. For example, on my network sudo mount -tcifs //dave/media /mnt/media/ -ousername=dspillett has my netbook (running Ubuntu) mount a share on my main desktop box (running XP) as /mnt/media. The filesystem type cifs is a newer version of smbfs; for accessing really old Windows setups you might need to replace -tcifs with -tsmbfs. When done you can disconnect the share with umount <mount-point>.

    The commands above will prompt for the Windows password. If you need to script mounting network shares then you need to provide that too. This can be done with -ousername=<windows-user-name>,password=<windows-password>, but this has security issues in a multi-user environment. Using -ocredentials=<credentials-file> is more secure, but make sure the credentials file can only be accessed by the right user(s) (i.e. root, or members of the admin group, and so on). This also allows you to set up the mount points in /etc/fstab so the shares are connected after each reboot and/or can be connected using the shorthand mount <mount-point>. See the relevant man page for more detail and extra options available (a minimal example is sketched below).
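
    A minimal sketch of the credentials-file plus fstab approach described above (all names and paths are examples only):

    # /root/.smbcredentials - make it readable only by root: chmod 600 /root/.smbcredentials
    username=windowsuser
    password=windowspassword

    # /etc/fstab line so the share is mounted at boot, or on demand with 'mount /mnt/media':
    //dave/media  /mnt/media  cifs  credentials=/root/.smbcredentials,_netdev  0  0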

Microsoft Exchange Server - Fails to receive e-mail from 'yahoo.com' - HELO Vs EHLO

All e-mail sent from a yahoo.com server to my domain fails to be correctly received. The difference between yahoo.com and other domains is that they open with HELO rather than EHLO and therefore attempt to send the data in one part rather than in chunks (as I understand it).

Looking at the Wikipedia example communication, it seems that we are conforming to the protocol on our side. When we expect data to arrive, however, the Yahoo server fails to reply, leading to a timeout.

2010-10-27T09:17:15.52...-,,Local
2010-10-27T09:17:18.11...+,,
2010-10-27T09:17:18.11...*,SMTPSubmit SMTPAcceptAnySender SMTPAcceptAuthoritativeDomainSender AcceptRoutingHeaders,Set Session Permissions
2010-10-27T09:17:18.11...>,"220 remote.ourserver.co.uk Microsoft ESMTP MAIL Service ready at Wed, 27 Oct 2010 10:17:17 +0100",
2010-10-27T09:17:18.13...<,HELO omp1009.mail.ukl.yahoo.com,
2010-10-27T09:17:18.13...>,250 remote.ourserver.co.uk Hello [xxx.xxx.xxx.xxx],
2010-10-27T09:17:18.14...<,MAIL FROM:<xxx@yahoo.co.uk>,
2010-10-27T09:17:18.14...*,08CD4019188CB09A;2010-10-27T09:17:18.111Z;1,receiving message
2010-10-27T09:17:18.14...>,250 2.1.0 Sender OK,
2010-10-27T09:17:18.16...<,RCPT TO:<administrator@ourserver.co.uk>,
2010-10-27T09:17:18.16...>,250 2.1.5 Recipient OK,
2010-10-27T09:17:18.17...,<,DATA,
2010-10-27T09:17:18.17...,>,354 Start mail input; end with <CRLF>.<CRLF>,
2010-10-27T09:19:08.19...

I am stumped as to what to try. I attempted to manually telnet in and send an e-mail, but I got rejected as I don't have a valid server to send from (I get refused as potential junk).

I have access to the logs of the box but not easy access to the config. Any advice would be greatly appreciated.

  • It shouldn't have anything to do with the remote server sending HELO instead of EHLO, as the Exchange server supports both. You can see in your log file entry that everything is going smoothly until your Exchange server issues a 354 response, which means "give me the message", and then it dies. Is there more in the log after the last line that you've posted?

    From joeqwerty
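    To rule out the DATA phase itself, a manual SMTP session can be driven from a host that is allowed to submit, e.g. an internal machine (a sketch; the addresses are placeholders and the lines after the connect are typed by hand, ending the message with a single "."):

    telnet remote.ourserver.co.uk 25
    # Then type, one line at a time:
    #   HELO test.example.com
    #   MAIL FROM:<test@example.com>
    #   RCPT TO:<administrator@ourserver.co.uk>
    #   DATA
    #   Subject: test
    #
    #   test body
    #   .
    #   QUIT
    # If this ends with a "250 ... Queued mail for delivery" response, the DATA handling itself is
    # fine and the problem is on the sending side or somewhere in between (firewall, SMTP inspection, MTU).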

Open Source projects that need mirrors?

I have an idle web server with unlimited traffic running until January 9th, 2011.

An answerer to this question had the idea of offering to host mirrors for Open Source projects.

Does anybody know of Open Source projects that have such massive mirroring needs that it's worth the effort to set another one up for this relatively short time span?

Does anybody know some sort of site or list for this purpose?

I hope this is not off-topic. I thought Server Fault would be the best place to ask, I apologize in advance if it isn't.

  • Facebook Mirror Site

    Mirror any of the projects on the Facebook mirror site. Most are critical open source projects which can always use extra bandwidth.

    From r0h4n

How can I install Samba on a Unix server?

How can I install Samba on a Unix server? For example, I can install openssh-client on Linux with the following command: apt-get install openssh-client. How can I install Samba on Unix, with a command?

  • To install samba with apt-get

    sudo apt-get install samba smbfs

    EDIT

    Solaris 9 may already have Samba installed; to check, run

    pkginfo | grep samba
    

    if it is installed you should see that the following packages are installed

    system SUNWsmbac samba - A Windows SMB/CIFS fileserver for UNIX (client)
    system SUNWsmbar samba - A Windows SMB/CIFS fileserver for UNIX (Root)
    system SUNWsmbau samba - A Windows SMB/CIFS fileserver for UNIX (Usr)
    

    If the packages are not installed then you will need to locate your original Sun installation media to install the Samba packages above.

    Mount the relevant disk in your CD drive then

    cd /cdrom/cdrom0/Solaris_9/Product
    
    pkgadd -d . SUNWsmbac 
    pkgadd -d . SUNWsmbar
    pkgadd -d . SUNWsmbau
    

    I don't think that the bits of samba that you want (smbmount) are installed by default on Solaris though.

    Osama Ahmad : This is on Linux, but how can I do it on Unix please :)
    Iain : Which variant of unix ?
    Osama Ahmad : Sun Microsystems Inc. SunOS 5.9 Generic May 2002
    SvenW : Why do you accept an answer that doesn't actually answer your question?
    From Iain
  • Solaris already has Samba installed. On HP-UX: swinstall -s /path/to/depot/PKGNAME.depot

    Osama Ahmad : i have Sun Microsystems Inc. SunOS 5.9 Generic May 2002
    Alexey Mykhailov : then you already have samba :)

Exchange Server 2010 with multiple domains

I have one Exchange Server 2010 which is working fine with one domain. My Exchange setup works as follows:

  1. A POP3 collector collects emails from one master catchall account and then delivers them to the Exchange server; this works perfectly.

Now I want to add another domain to the same Exchange. I have added the new domain as a trusted domain and email address policy, and this new domain's email account works fine for internal emails.

What I have done now is forward the new email account to the same catchall account.

But if I send email from any other external email address, the email bounces. I can see the email received by the POP3 collector but bounced by the Exchange server.

To make it clearer, let me explain the logic I am working on.

  I have 2 domains:
  1. domain1.com  (catchall@domain1.com)
  2. domain2.com  (info@domain2.com -->catchall@domain1.com)

Now, on my machine with the Exchange server, I have a POP3 collector which collects all emails from catchall@domain1.com and forwards them to the Exchange 2010 server.

All emails to domain1.com work perfectly, but when I send email to info@domain2.com, the email is redirected to catchall@domain1.com correctly; when the Exchange server receives the email, though, it bounces.

I have also studied the URL

link text

and followed the whole process, but with no success.

I also checked that my DNS/MX is working fine, as the bounce message is coming from my Exchange server.

EDIT

The only problem is with the accepted domain: the email reaches the Exchange server and then bounces back.

I just tried this today:

I created one user called test, then I went to his properties --> email.

There was only one email address, test@domain2.com.

I tried to send email to test@domain2.com from the internet (email bounced).

Then I went back to the test user's properties --> email

and added one address, test@domain1.com.

Again I tried to send email to test@domain1.com from the internet (email received).

I think the only problem is with the accepted domain, but in Hub Transport it shows as accepted.

Is there any way to check whether a domain is properly accepted or not in Exchange 2010?

Thanks

  • Have you added the "info@domain2.com" address to the "catchall@domain1.com" account?

    air : Yes, actually info@domain2.com redirects email to catchall@domain1.com
    : Is "info@domain2.com" in the address properties of the "catchall@domain1.com" mailbox? If not, it should be, so "catchall@domain1.com" can correctly receive redirected (not forwarded) emails addressed to the "info@domain2.com" address.

Automatic folder backup from XP to Windows Server 2003

My HR/Finance manager stores important files & records in her Win XP desktop PC and she rarely copies all her files into our file server that runs on Windows Server 2003 (the last time she did it was 3 months ago). How do I automate folder(s) backup from her PC to the fileserver?

  • There are various options:

    The best would be to redirect her My Documents folder to a network share.

    If for some reason that's not appropriate, you could use one of the many tools that syncs files between locations - rsync, Second Copy, or whatever.

    From Ward
  • Folder Redirection, Roaming Profiles can handle the user files properly as long as there aren't additional application-specific storage locations containing important data.

    If it does warrant a 3rd party sync app, I use SyncBack Pro. It's relatively inexpensive & verifies write operations to prevent spreading corruption, which was a difficult feature to find.

    The presence of a drive with serial number foo can trigger the backup job, which helps to automate offline backup rotations to external devices similarly to what's often done with tapes.

    A post-job script can be set to eject the drive and keep the data safe.

    From NginUS

SSH over HTTP(S)

I have an apache server at work.com that only allows incoming HTTP and HTTPS requests over the usual port 80 and 443. These are the only ports that are open on the network.

I would like to be able to SSH in to the server when I am at home, but IT refuses to open port 22.

Is there a way to configure apache to accept SSH traffic at the address ssh.work.com and forward it to sshd on port 80?

  • see this guide on using corkscrew
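    Roughly, corkscrew pushes the ssh connection through an HTTP proxy's CONNECT method. Assuming the client side sits behind such a proxy and sshd at work is listening on 443 (as in the answer further down), a client-side sketch (proxy name/port and host names are placeholders):

    # ~/.ssh/config on the client machine
    Host work
        HostName ssh.work.com
        Port 443
        ProxyCommand corkscrew proxy.example.com 3128 %h %p

    # then simply:
    ssh work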

  • There's sslh for this purpose.

    delivarator : This did the trick and was super easy to set up. The only thing I needed to do was tell apache to listen on a different port since sslh binds to 443. Thanks!
    From joschi
  • If you need shell access, you should just get it from your organization and not have to do weird stuff to get it.

    That being said, you can install webshell or anyterm and accomplish what you need, but do get your normal shell access or you might get into trouble. The webshell trick is nice for when you need to have a shell at your home machine from one of those proxied networks we all know.

  • Firstly, if you're going to try to run ssh on a non-standard port to defeat your local IT policies (wrong-headed as I feel they are), it's normal to use 443 rather than 80. This is because HTTP is often proxied by organisations; this proxying is often done transparently, so it can be difficult to tell if it's happening. HTTPS, by contrast, being end-to-end encrypted, is usually impossible to proxy, so most organisations either don't bother, or have a proxy configured as a simple pass-through. This makes TCP/443 a safer choice for a non-standard ssh than TCP/80. Joschi's suggestion of sslh, I notice, is designed to co-exist with HTTPS rather than HTTP.

    That said, ssh doesn't, to the best of my knowledge, have any kind of virtual host name support, so using ssh.work.com isn't going to work if that resolves to an IP address which is already running a real Apache listener on TCP/443. If, however, you have a public IP address that you can spare for just this purpose on your work machine, you can configure sshd to run on port 443 with

    Port 443
    

    in sshd_config, and then just point a remote ssh client at your ip address with the -p 443 flag.

    If you can't spare an ip address for that, then sslh is your man.

    From MadHatter

Help! Why do blocked bots still waste bandwidth?

I've blocked a majority of bots that keep sending POST requests to my website, using .htaccess.

Each time one of these bots tries to access my website it receives a 403 forbidden error message.

My question is, why is my bandwidth usage still increasing if I've blocked them in my .htaccess file?

I was always under the impression that web hosting bandwidth is measured by the amount of data that my server sends, not by the amount that it receives.

Is there a way to configure my .htaccess file to just ignore these bots and not send back a status code?

By the way, I'm using a shared server with "unlimited" bandwidth, but the amount of bandwidth that these bots are wasting is ridiculous.

Thanks!

  • By the time your .htaccess file takes effect, the client's payload has already passed the network interface of your server. From the point of view of your ISP there is basically no difference between incoming and outgoing traffic from your server. You or your provider will have to pay for it anyway.

    If you don't want any data transfer to happen, you'll need to block the clients in the border router of your ISP (or at least in a packet filter which is not running on the server itself).

    From joschi
  • The .htaccess file tells your server what specific reply should be sent to these bots. So the request is still happening, and you are still sending back data (the 403 message).

    You have no way to prevent the botnet requests from reaching you; only your provider can block them before they reach your server. However, you can send nothing back by simply closing the connection for this IP. I'm not sure if Apache has a module to do that; otherwise you can use a software firewall like iptables.

    From Julien
  • As per the other answers, once the bot gets to the stage of getting a 403 response back, you have both sent and received data.

    It'd probably be better to not respond at all, if you can find an apache module to simply drop the connection.

    If you have control of the host AND use SYN cookies, it may be worth also routing all IPs you block to 127.0.0.1 (or, maybe even better, adding them to a DROP rule in an iptables chain).

    bronzebeard : extending Vatine's answer, you could setup Squid ( or something similar ) and setup acls
    From Vatine
  • There are several approaches you can take. One is setting up firewall rules via iptables (sketched below). Another is disallowing these bots in robots.txt.
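
    If you do have root on the box (often not the case on shared hosting, which is the real catch here), dropping the offending IPs before Apache ever sees them looks roughly like this (a sketch; the addresses are placeholders):

    # Silently drop traffic from known bot IPs - no 403, no response bytes at all.
    iptables -A INPUT -s 203.0.113.25 -j DROP
    iptables -A INPUT -s 198.51.100.0/24 -j DROP
    # Note: the inbound packets still reach your network interface, as the other answers point out.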

How big would a MySQL database be if I save all webpages' title and URL in it?

For learning purposes, I want to make a simple web indexer which crawls the web and saves all found pages in a MySQL database with their titles and URLs, with this table (the page's content is not saved):

  • id: integer AUTO_INCREMENT PRI
  • title: varchar(100)
  • url: varchar(500)

How big would that database be approximately? Is it about hundreds of MB, GB or around TBs? Thanks.

  • Hi Koning,

    For the quick and dirty answer, scroll to the bottom. Otherwise, read through my narrative to understand how I came up with those numbers.

    In 2008, Google released some numbers that might be of interest of you. At that time, Google's spiders were aware of over 1 trillion (that's 1,000,000,000,000) unique URLs. One thing to take note of is that not all of these URLs are indexed. For your case here, we'll pretend that we are going to index everything. You can read this announcement here: http://googleblog.blogspot.com/2008/07/we-knew-web-was-big.html

    The current size of your id column only allows for about 2 billion URLs in the index. If you make it an unsigned int you can squeeze 4 billion out of it, but assuming a near-infinite scale you'd want to use an unsigned bigint. In all reality, you'd want to use a UUID or something similar so you can generate IDs concurrently (and from multiple hosts), but for this exercise we will assume that we are using an unsigned bigint.

    So, in theory, we've got this infinitely scalable MySQL table that is defined such as:

    • id: unsigned bigint AUTO_INCREMENT
    • title: varchar(100)
    • url: varchar(500)

    The storage requirements for each of these columns are:

    • id: 8 bytes
    • title: 100 + 1 = 101 bytes
    • url: 500 + 2 = 502 bytes
    • Row size*: 502 + 101 + 8 = 611 bytes (Neglecting overhead, table headers, indexes, etc)

    Reference: http://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html

    Now, to get the theoretical table size we simply multiply by our 1 trillion unique URLs:

    611 bytes * 1,000,000,000,000 URLs = 611,000,000,000,000 bytes =~ 555.7 terabytes

    So there you have it. 1 trillion URLs times the storage size of the table we defined would take up almost 556 terabytes of data. We would also have to add data for indexes, table overhead, and some other things. Likewise, we could also subtract data because for our exercise I assumed each varchar column was being maxed out. I hope this helps.

    (Also, just a quick clarification: I know that bigint columns aren't near-infinite, but doing the math is easier when you're not worrying about logistics)
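
    For reference, a quick shell check of that last multiplication (a sketch; assumes bc is installed):

    # 611 bytes/row * 10^12 rows, expressed in tebibytes (1024^4 bytes):
    echo "611 * 10^12 / 1024^4" | bc -l    # prints roughly 555.70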

SQL Server 2008 R2 Express

I installed SQL Server 2008 R2 on Windows Server 2008 and it's working fine, but it takes too much memory. So I decided to install SQL Server 2008 R2 Express instead. I tried it a couple of times, and every time I get an error partway through the install: 'User doesn't have permissions ..'. After two attempts it's now showing me two partially installed SQL Server instances.

I have 3 questions: How do I uninstall those two SQL Server Express instances? How do I properly install a SQL Server Express instance? Which user account do I need to specify for running the SQL Server instance?

Thank you for a detailed answer.

  • You should be able to remove the instances using Add/Remove Programs. You'll need administrative rights to do the install; the permissions on the service account will depend on whether it needs access to domain resources - there is more info here.

    As for the memory consumption of the Standard edition, the default behavior of SQL assumes it can use as much memory as it wants, but you can easily limit it through a configuration option (a command-line sketch is below).

    From SqlACID
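    The cap referred to above is the 'max server memory' option; it can be set from SSMS (server Properties > Memory) or from a command line roughly like this (a sketch; the instance name and the 2048 MB value are placeholders):

    sqlcmd -S .\SQLEXPRESS -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 2048; RECONFIGURE;"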
  • How to uninstall those two sql server express instances?

    Log in as admin and uninstall, like any other program. I've never had to (normally you don't uninstall SQL Server). http://support.microsoft.com/kb/909967 has some information that is still relevant for 2008.

    How to install properly sql server express instance?

    Log in as local admin, start installer.

    Which user account I need to specify for running sqlserver instance?

    Depends on what you WANT. I normally run them as local system or a special domain user, depending on security needs.

    I installed SQL Server 2008 R2 on Windows Server 2008 and it's working fine but it takes too much memory.

    I bet it does NOT take too much memory (i.e. more memory than it should). SQL Server likes using all memory as cache because it assumes it is alone on the computer - which is a VERY valid assumption. Should this be false, there are properties on the server you can set that limit the memory use. So, by definition, "too much memory" is more memory than the machine has (causing swapping) or more than you have defined. I really bet this is not the case.

    Eugene : Where can I find the settings to configure memory usage?
    From TomTom

Fusion will not boot Win7 after ubuntu install

I can boot Win 7 and Ubuntu 9.10 fine natively.

When I try to boot Win 7 in Fusion I get this: GRUB loading. error: unknown filesystem, grub rescue>

After installing Ubuntu 9.10 I followed instructions on here and moved GRUB to the Ubuntu partition, which allowed Ubuntu to load, but not Win 7.

I used the Win 7 DVD to do a boot record repair on the drive, but Win 7 still will not load in Fusion 3.01.

To Chopper3:

I am trying to boot from 2 different VMs. I used these commands to modify the 2 VMs:

./vmware-rawdiskCreator create /dev/disk1 2 "/temp.vmwarevm/ubuntu" ide
./vmware-rawdiskCreator create /dev/disk1 1 "/temp.vmwarevm/windows7" ide

I moved the *.vmdk files into their respective VMs.

So the Win7 VM is pointing to partition 1 and the Ubuntu VM is pointing to partition 2 of disk1.

Launching the Ubuntu VM works fine. Launching the Win7 VM, I get the GRUB error.

One more edit:

Sorry for not making it clearer.

I am using 2 VMs. I am not trying to boot from the same vmdk files.

I am trying to boot 2 different virtual machines from 2 different Boot Camp partitions. One VM boots the Ubuntu Boot Camp partition; it works after following the directions posted in another post.

And one VM boots the Windows 7 Boot Camp partition; it gives me the GRUB error I posted earlier.

Does anyone have any ideas how to fix this?

  • Are you trying to get W7 and Ubuntu to boot from within the same Fusion VM, it's not clear?

    If so, why? You could just have two VMs, one for each OS.

    If not please clarify.

    edit - Thanks for the information. That said, why on earth are you trying to boot two different OS VMs from the same vmdk file(s)? Or are you trying to do this from your Boot Camp partition? If the former, just use two separate vmdks; if the latter, well, that isn't supported. If I've misread this again then please try to spell out exactly what you're trying to achieve.

    edit2 - I'm aware that some have managed to get Parallels to do what you're after, but I can't seem to find any help on getting Fusion to do the same. Do you HAVE to have all three OSes in physical partitions?

    ToreTrygg : I've added some more information.
    ToreTrygg : Thanks. I've added more
    From Chopper3
  • Hi, I tried to set up a partition so that I could use Ubuntu 9.10 and Win 7. I used Wubi. After running Ubuntu from the USB I decided that I was going to install it, as in, make a partition. However, after Ubuntu modified the partitions and whatnot (I don't fully understand what happened), I got the same error as mentioned in the beginning: GRUB loading. error: unknown filesystem, grub rescue>. So now I am screwed because I can't load Win7 OR Ubuntu, except if I do it from the USB. I'm in desperate need of help; I fear I have messed up my new computer.

    From Ernesto