.corp.sgi.com!pablo
Subject: Sybase FAQ: 2/16 - section 1
Date: 1 Sep 1997 06:00:40 GMT
Summary: Info about SQL Server, bcp, isql and other goodies
Posting-Frequency: monthly

Archive-name: databases/sybase-faq/part2
URL: http://reality.sgi.com/pablo/Sybase_FAQ

              Q1.1: HOW TO START/STOP SQL SERVER WHEN CPU REBOOTS
                                       
   
     _________________________________________________________________
   
   Below is an example of the various files (on _Irix_) that are needed
   to start/stop a SQL Server. The information can easily be extended to
   any UNIX platform.
   
   The idea is to allow as much flexibility as possible to the two
   classes of administrators who administer the machine:
     * The System Administrator
     * The Database Administrator
       
   Any errors introduced by the DBA will not interfere with the System
   Administrator's job.
   
   With that in mind we have the system startup/shutdown file
   _/etc/init.d/sybase_ invoking a script defined by the DBA:
   _/usr/sybase/sys.config/{start,stop}.sybase_
   
_/etc/init.d/sybase_

   On some operating systems this file must be linked to corresponding
   entries in _/etc/rc0.d_ and _/etc/rc2.d_ -- see _rc0(1M)_ and _rc2(1M)_
   

#!/bin/sh
# last modified:  10/17/95, sr.
#
# Make symbolic links so this file will be called during system stop/start.
# ln -s /etc/init.d/sybase /etc/rc0.d/K19sybase
# ln -s /etc/init.d/sybase /etc/rc2.d/S99sybase
# chkconfig -f sybase on

# Sybase System-wide configuration files
CONFIG=/usr/sybase/sys.config

# $IS_ON tests chkconfig(1M) flags; Irix init scripts conventionally
# define it as:
IS_ON=/sbin/chkconfig

if $IS_ON verbose ; then        # For a verbose startup and shutdown
        ECHO=echo
        VERBOSE=-v
else                            # For a quiet startup and shutdown
        ECHO=:
        VERBOSE=
fi

case "$1" in
'start')
        if $IS_ON sybase; then
                if [ -x $CONFIG/start.sybase ]; then
                   $ECHO "starting Sybase servers"
                   /bin/su - sybase -c "$CONFIG/start.sybase $VERBOSE &"
                else
                   echo "$0: cannot execute $CONFIG/start.sybase" >&2
                fi
        fi
        ;;

'stop')
        if $IS_ON sybase; then
                if [ -x $CONFIG/stop.sybase ]; then
                   $ECHO "stopping Sybase servers"
                   /bin/su - sybase -c "$CONFIG/stop.sybase $VERBOSE &"
                else
                   echo "$0: cannot execute $CONFIG/stop.sybase" >&2
                fi
        fi
        ;;

*)
        echo "usage: $0 {start|stop}"
        ;;
esac
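The $IS_ON test above relies on the Irix chkconfig(1M) convention. On a
platform without chkconfig it can be approximated in plain sh; the sketch
below is a hypothetical stand-in (the flag directory layout, modelled
loosely on /etc/config, is an assumption):

```shell
#!/bin/sh
# Hypothetical, portable stand-in for the Irix chkconfig(1M) test.
# A flag is "on" if $CONFDIR/<flag> exists and contains the word "on".
CONFDIR=${CONFDIR:-/tmp/config.$$}

chkconfig_sim() {
    [ -f "$CONFDIR/$1" ] || return 1
    flag=`cat "$CONFDIR/$1"`
    [ "$flag" = "on" ]
}

# Demonstration
mkdir -p "$CONFDIR"
echo on > "$CONFDIR/sybase"
if chkconfig_sim sybase ; then
    echo "sybase is on"
fi
rm -rf "$CONFDIR"
```

An init script could then use `if chkconfig_sim sybase` exactly where the
Irix script uses `if $IS_ON sybase`.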

_/usr/sybase/sys.config/{start,stop}.sybase_

  start.sybase
  

#!/bin/sh -a

#
# Script to start sybase
#
# NOTE: different versions of sybase exist under /usr/sybase/{version}
#

# Determine if we need to spew our output
if [ "$1" != "spew" ] ; then
   OUTPUT=">/dev/null 2>&1"
else
   OUTPUT=""
fi

# 10.0.2 servers
HOME=/usr/sybase/10.0.2
cd $HOME

# Start the backup server
eval install/startserver -f install/RUN_BU_KEPLER_1002_52_01 $OUTPUT

# Start the dataservers
# Wait two seconds between starts to minimize trauma to CPU server
eval install/startserver -f install/RUN_FAC_WWOPR $OUTPUT
sleep 2
eval install/startserver -f install/RUN_MAG_LOAD $OUTPUT

exit 0

  stop.sybase
  

#!/bin/sh

#
# Script to stop sybase
#

# Determine if we need to spew our output
if [ -z "$1" ] ; then
   OUTPUT=">/dev/null 2>&1"
else
   OUTPUT="-v"
fi

eval killall -15 $OUTPUT dataserver backupserver sybmultbuf
sleep 2

# if they didn't die, kill 'em now...
eval killall -9 $OUTPUT dataserver backupserver sybmultbuf

exit 0

   If your platform doesn't support _killall_, it can easily be simulated
   as follows:
   

   #!/bin/sh

   #
   # Simple killall simulation...
   #    $1 = signal
   #    $2 = process_name
   #

   #
   # no error checking but assume first parameter is signal...
   # what ya want for free?  :-)
   #

   # note: in ps -ef output the PID is the second column
   kill -$1 `ps -ef | fgrep $2 | fgrep -v fgrep | awk '{ print $2 }'`
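A slightly stricter variant is sketched below (the helper name killall_sim
is hypothetical). Since in ps -ef output the PID is column 2 and the
command name column 8, matching the command column exactly avoids killing,
say, dataserver_old when asked for dataserver:

```shell
#!/bin/sh
# killall_sim <signal> <process_name>
# Stricter killall simulation: match the ps -ef command column exactly
# instead of substring-matching the whole line.
killall_sim() {
    pids=`ps -ef | awk -v name="$2" '$8 == name { print $2 }'`
    [ -n "$pids" ] && kill -"$1" $pids
}
```

Usage is the same as before, e.g. `killall_sim 15 dataserver`.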

   
     _________________________________________________________________

                      Q1.2: HOW TO CLEAR A _LOG SUSPEND_
                                       
   
     _________________________________________________________________
   
   A connection that is in a _log suspend_ state is there because the
   transaction that it was performing couldn't be logged. The reason it
   couldn't be logged is because the database transaction log is full.
   Typically, the connection that caused the log to fill is the one
   suspended. We'll get to that later.
   
   In order to clear the problem you must dump the transaction log. This
   can be done as follows:
   

dump tran _db_name_ to _dump_device_
go

   At this point, any completed transactions will be flushed out to disk.
   If you don't care about the recoverability of the database, you can
   issue the following command:
   

dump tran _db_name_ with truncate_only
go

   If that doesn't work, you can use the _with no_log_ option instead of
   the _with truncate_only_.
   
   After successfully clearing the log the suspended connection(s) will
   resume.
   
   Unfortunately, as mentioned above, there is the situation where the
   connection that is suspended is the culprit that filled the log.
   Remember that dumping the log _only_ clears out completed transactions.
   If the connection filled the log with one large transaction, then
   dumping the log isn't going to clear the suspension.
   
   What you need to do is issue a SQL Server _kill_ command on the
   connection and then unsuspend it:
   

select lct_admin("unsuspend", db_id("_db_name_"))
go

  Retaining Pre-System 10 Behavior
  
   By setting a database's _abort xact on log full_ option, pre-System 10
   behavior can be retained. That is, if a connection cannot log its
   transaction to the log file, it is aborted by the SQL Server rather
   than suspended.
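   The log-dumping sequence above can be scripted. The sketch below only
   generates the T-SQL batch; the isql invocation in the comment (flags,
   server name, password handling) is an assumption:

```shell
#!/bin/sh
# gen_dump_tran <db_name> -- emit the "dump tran ... with truncate_only"
# batch described above.  To execute it for real, pipe into isql, e.g.
# (hypothetical flags):  gen_dump_tran mydb | isql -Usa -SSYBSERVER
gen_dump_tran() {
    cat <<EOF
dump tran $1 with truncate_only
go
EOF
}

gen_dump_tran mydb
```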
     _________________________________________________________________

                Q1.3: WHAT'S THE BEST VALUE FOR _CSCHEDSPINS_?
                                       
   
     _________________________________________________________________
   
   It is crucial to understand that _cschedspins_ is a tunable parameter
   (recommended values being between 1-2000) and the optimum value is
   completely dependent on the customer's environment. _cschedspins_ is
   used by the scheduler only when it finds that there are no runnable
   tasks. If there are no runnable tasks, the scheduler has two options:
    1. Let the engine go to sleep (which is done by an OS call) for a
       specified interval or until an event happens. This option assumes
       that tasks won't become runnable because of tasks executing on
       other engines, which is the case when tasks are waiting for I/O
       more than for any other resource such as locks. We can then free
       up the CPU resource (by going to sleep) and let the system use it
       to expedite completion of system tasks, including I/O.
    2. Go and look for a ready task again. This option assumes that a
       task would become runnable in the near term and so incurring the
       extra cost of an OS context switch through the OS sleep/wakeup
       mechanism is unacceptable. This scenario assumes that tasks are
       waiting on resources such as locks, which could free up because of
       tasks executing on other engines, more than they wait for I/O.
       
   _cschedspins_ controls how many times we would choose option 2 before
   choosing option 1. Setting _cschedspins_ low favors option 1 and
   setting it high favors option 2. Since an I/O intensive task mix fits
   in with option 1, setting _cschedspins_ low may be more beneficial.
   Similarly since a CPU intensive job mix favors option 2, setting
   _cschedspins_ high may be beneficial.
   
   The consensus is that a single cpu server should have _cschedspins_
   set to 1. However, I strongly recommend that users carefully test
   values for _cschedspins_ and monitor the results closely. I have seen
   more than one site shoot itself in the foot, so to speak, by changing
   this parameter in production without a good understanding of its
   environment.
     _________________________________________________________________
                         Q1.4: TRACE FLAG DEFINITIONS

----------------------------------------------------------------------------

To activate trace flags, add them to the RUN_* script. The following example
uses the 1611 and 260 trace flags.

     Use of these traceflags is not recommended by Sybase. Please use
     at your own risk.

% cd ~sybase/install
% cat RUN_BLAND
#!/bin/sh
#
# SQL Server Information:
#  name:                          BLAND
#  master device:                 /usr/sybase/dbf/BLAND/master.dat
#  master device size:            25600
#  errorlog:                      /usr/sybase/install/errorlog_BLAND
#  interfaces:                    /usr/sybase
#
/usr/sybase/dataserver -d/usr/sybase/dbf/BLAND/master.dat \
-sBLAND -e/usr/sybase/install/errorlog_BLAND -i/usr/sybase \
-T1611 -T260

----------------------------------------------------------------------------

                                 Trace Flags

  Flag                              Description

 200   Displays messages about the before image of the query-tree.

 201   Displays messages about the after image of the query-tree.

 241   Compress all query-trees whenever the SQL dataserver is started.

       Reduce TDS (Tabular Data Stream) overhead in stored procedures.
       Turn off done-in-proc packets. Do not use this if your application
       is a ct-lib based application; it'll break.
 260
       Why set this on? Glad you asked: typically, with a db-lib
       application, a packet is sent back to the client for each batch
       executed within a stored procedure. This can be taxing in a
       WAN/LAN environment.

       This trace flag instructs the dataserver to not recompile a child
 299   stored procedure that inherits a temp table from a parent
       procedure.

 302   Print information about the optimizer's index selection.

 310   Print information about the optimizer's join selection.

 311   Display the expected IO to satisfy a query. Like statistics IO
       without actually executing.

 317   Provide extra optimization information.

 320   Turn off the join order heuristic.

 324   Turn off the like optimization for ad-hoc queries using
       @local_variables.

 602   Prints out diagnostic information for deadlock prevention.

 603   Prints out diagnostic information when avoiding deadlock.

 699   Turn off transaction logging for the entire SQL dataserver.

 1204* Send deadlock detection to the errorlog.

 1205  Stack trace on deadlock.

 1206  Disable lock promotion.

 1603* Use standard disk I/O (i.e. turn off asynchronous I/O).

 1605  Start secondary engines by hand.

       Create a debug engine start file. This allows you to start up a
       debug engine which can access the server's shared memory for
       running diagnostics. I'm not sure how useful this is in a
 1606  production environment as the debugger often brings down the
       server. I'm not sure if Sybase have ported the debug stuff to
       10/11. Like most of their debug tools it started off quite strongly
       but was never developed.

       Start up engine 0 only; use dbcc engine(online) to bring up
 1608  additional engines one at a time, up to the maximum number of
       configured engines.

 1610* Boot the SQL dataserver with TCP_NODELAY enabled.

 1611* If possible, pin shared memory -- check errorlog for
       success/failure.

 1613  Set affinity of the SQL dataserver engine's onto particular CPUs --
       usually pins engine 0 to processor 0, engine 1 to processor 1...

 1615  SGI only: turn on recoverability to filesystem devices.

 2512  Prevent dbcc from checking syslogs. Useful when you are constantly
       getting spurious allocation errors.

       Display each log record that is being processed during recovery.
 3300  You may wish to redirect stdout because it can be a lot of
       information.

 3500  Disable checkpointing.

 3502  Track checkpointing of databases in errorlog.

 3601  Stack trace when error raised.

 3604  Send dbcc output to screen.

 3605  Send dbcc output to errorlog.

 3607  Do not recover any database, clear tempdb, or start up checkpoint
       process.

 3608  Recover master only. Do not clear tempdb or start up checkpoint
       process.

 3609  Recover all databases. Do not clear tempdb or start up checkpoint
       process.

 3610  Pre-System 10 behavior: divide by zero to result in NULL instead of
       error - also see Q7.5.

 3620  Do not kill infected processes.

 4012  Don't spawn chkptproc.

 4013  Place a record in the errorlog for each login to the dataserver.

 4020  Boot without recover.

       Forces all I/O requests to go thru engine 0. This removes the
 5101  contention between processors but could create a bottleneck if
       engine 0 becomes busy with non-I/O tasks. For more
       information...5101/5102.

 5102  Prevents engine 0 from running any non-affinitied tasks. For more
       information...5101/5102.

 7103  Disable table lock promotion for text columns.

 8203  Display statement and transaction locks on a deadlock error.

 *     Starting with System 11 these are sp_configure'able

----------------------------------------------------------------------------

                      Q1.5: TRACE FLAGS -- 5101 AND 5102
                                       
   
     _________________________________________________________________
   
  5101
  
   Normally, each engine issues and checks for its own Disk I/O on behalf
   of the tasks it runs. In completely symmetric operating systems, this
   behavior provides maximum I/O throughput for SQL Server. Some
   operating systems are not completely symmetric in their Disk I/O
   routines. For these environments, the server can be booted with the
   5101 trace flag. While tasks still request disk I/O from any engine,
   the actual request to/from the OS is performed by engine 0. The
   performance benefit comes from the reduced or eliminated contention on
   the locking mechanism inside the OS kernel. To enable I/O affinity to
   engine 0, start SQL Server with the 5101 Trace Flag.
   
   Your errorlog will indicate the use of this option with the message:

        Disk I/O affinitied to engine: 0

   This trace flag only provides performance gains for servers with 3 or
   more dataserver engines configured and being significantly utilized.
   
   _Use of this trace flag with fully symmetric operating systems will
   degrade performance!_
   
  5102
  
   The 5102 trace flag prevents engine 0 from running any non-affinitied
   tasks. Normally, this forces engine 0 to perform Network I/O only.
   Applications with heavy result set requirements (either large results
   or many connections issuing short, fast requests) may benefit. This
   effectively eliminates the normal latency for engine 0 to complete
   running its user thread before it issues the network I/O to the
   underlying network transport driver. If used in conjunction with the
   5101 trace flag, engine 0 would perform all Disk I/O and Network I/O.
   For environments with heavy disk and network I/O, engine 0 could
   easily saturate when only the 5101 flag is in use. This flag allows
   engine 0 to concentrate on I/O by not allowing it to run user tasks.
   To force task affinity off engine 0, start SQL Server with the 5102
   Trace Flag.
   
   Your errorlog will indicate the use of this option with the message:

        I/O only enabled for engine: 0

   
     _________________________________________________________________
   
   _Warning: Not supported by Sybase. Provided here for your enjoyment._

                      Q1.6: WHAT IS _CMAXPKTSZ_ GOOD FOR?
                                       
   
     _________________________________________________________________
   
   _cmaxpktsz_ corresponds to the parameter "maximum network packet size"
   which you can see through _sp_configure_. I recommend only updating
   this value through _sp_configure_. If some of your applications send
   or receive large amounts of data across the network, these
   applications can achieve significant performance improvement by using
   larger packet sizes. Two examples are large bulk copy operations and
   applications reading or writing large text or image values. Generally,
   you want to keep the value of default network packet size small for
   users performing short queries, and allow users who send or receive
   large volumes of data to request larger packet sizes by setting the
   maximum network packet size configuration variable.
   
   _caddnetmem_ corresponds to the parameter "additional netmem" which
   you can see through _sp_configure_. Again, I recommend only updating
   this value through _sp_configure_. "additional netmem" sets the
   maximum size of additional memory that can be used for network packets
   that are larger than SQL Server's default packet size. The default
   value for additional netmem is 0, which means that no extra space has
   been allocated for large packets. See the discussion below, under
   maximum network packet size, for information on setting this
   configuration variable. Memory allocated with additional netmem is
   added to the memory allocated by the _memory_ configuration parameter.
   It does not affect other SQL Server memory uses.
   
   SQL Server guarantees that every user connection will be able to log
   in at the default packet size. If you increase maximum network packet
   size and additional netmem remains set to 0, clients cannot use packet
   sizes that are larger than the default size: all allocated network
   memory will be reserved for users at the default size. In this
   situation, users who request a large packet size when they log in
   receive a warning message telling them that their application will use
   the default size. To determine the value for additional netmem if your
   applications use larger packet sizes:
     * Estimate the number of simultaneous users who will request large
       packet sizes and the sizes their applications will request, then
       multiply each user count by its requested packet size and sum the
       products.
     * Multiply this sum by three, since each connection needs three
       buffers.
     * Add 2% for overhead, rounded up to the next multiple of 512.
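   That arithmetic can be sketched in a few lines of sh; the user count
   and packet size below are illustrative assumptions, not recommended
   values:

```shell
#!/bin/sh
# Estimate "additional netmem" following the steps above.
USERS=25            # simultaneous users requesting large packets (example)
PKTSIZE=4096        # requested packet size in bytes (example)

total=`expr $USERS \* $PKTSIZE \* 3`          # 3 buffers per connection
total=`expr $total \* 102 / 100`              # add ~2% overhead
total=`expr \( $total + 511 \) / 512 \* 512`  # round up to a multiple of 512
echo "additional netmem = $total bytes"
```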
       
   
     _________________________________________________________________

            Q1.7: HOW DO I MOVE _TEMPDB_ OFF OF THE MASTER DEVICE?
                                       
   
     _________________________________________________________________
   
     I received a message from Sybase TS recommending that the FAQ no
     longer advocate the physical removal of entries from the
     _sysusages/sysdatabases_ tables. It makes recovery _extremely_
     painful.
     
     After reviewing their write-up I agree.
     
  A quick alternative - Sybase TS Preferred Method
  
   This is the Sybase TS method of removing _most_ activity off of the
   master device:
    1. Alter tempdb on another device:

 1> alter database tempdb on ...
 2> go
    2. Use the tempdb:

 1> use tempdb
 2> go
    3. Drop the segments:

 1> sp_dropsegment "default", tempdb, master
 2> go
 1> sp_dropsegment "logsegment", tempdb, master
 2> go
 1> sp_dropsegment "system", tempdb, master
 2> go

     Note that there is still _some_ activity on the master device. On a
     three connection test that I ran:

   while ( 1 = 1 )
   begin
      create table #x (col_a int)
      drop table #x
   end

     there was one write per second. Not bad.
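   For convenience, the three steps can be collected into one generated
   isql batch. This is a sketch: the device name and size in the alter
   statement and the isql flags in the comment are hypothetical
   placeholders to be edited for your site:

```shell
#!/bin/sh
# Emit the tempdb-move batch described above.  tempdb_dev and the size
# (100MB) are placeholders; run for real with something like
# (hypothetical flags):  sh move_tempdb.sh | isql -Usa -SSYBSERVER
gen_move_tempdb() {
    cat <<'EOF'
alter database tempdb on tempdb_dev = 100
go
use tempdb
go
sp_dropsegment "default", tempdb, master
go
sp_dropsegment "logsegment", tempdb, master
go
sp_dropsegment "system", tempdb, master
go
EOF
}

gen_move_tempdb
```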
     
   
     _________________________________________________________________

                  Q1.8: BUILDMASTER CONFIGURATION DEFINITIONS
                                       
   
     _________________________________________________________________
   
     _Attention!_ Please be very careful with these parameters. Use
     them only at your own risk. Be sure to keep a copy of the original
     parameters, and have a dump of all databases (including master)
     handy.
     
   
     _________________________________________________________________
   
   
   
   The following is a list of configuration parameters and their effect
   on the SQL Server. Changes to these parameters can affect performance
   of the server. Sybase does not recommend modifying these parameters
   without first discussing the change with Sybase Tech Support. This
   list is provided for information only.
   
   These are categorized into two kinds:
     * Configurable through sp_configure, and
     * not configurable, but changeable through 'buildmaster
       -y<variable>=value -d<dbdevice>'
       
   
   
  Configurable variables:
  
   crecinterval:
   
   The recovery interval specified in minutes.
   
   ccatalogupdates:
   
   A flag indicating whether system catalogs can be updated.
   
   cusrconnections:

                This is the number of user connections allowed in SQL
                Server.  This value + 3 (one each for the checkpoint,
                network and mirror handlers) makes up the number of PSS
                configured in the server.

   
     _________________________________________________________________
   
   cfgpss:

                Number of PSS configured in the server. This value will
                always be 3 more than cusrconnections. The reason is we
                need PSS for checkpoint, network and mirror handlers.

                THIS IS NOT CONFIGURABLE.

   
     _________________________________________________________________
   
   cmemsize:

                The total memory configured for the Server in 2k
                units.  This is the memory the server will use for both
                Server and Kernel Structures.  For Stratus or any 4k
                pagesize implementation of SQL Server, certain values
                will change as appropriate.

   
   
   cdbnum:

                This is the number of databases that can be open in SQL
                Server at any given time.

   
   
   clocknum:

                Variable that defines and controls the number of logical
                locks configured in the system.

   
   
   cdesnum:

                This is the number of objects that can be open at
                a given point in time.

   
   
   cpcacheprcnt:

                This is the percentage of the cache to be used for
                caching procedures.

   
   
   cfillfactor:
   
   Fill factor for indexes.
   
   ctimeslice:

                This value is in units of milliseconds and determines
                how much time a task is allowed to run before it yields.
                It is internally converted to ticks. See below for the
                explanations of cclkrate, ctimemax etc.

   
   
   ccrdatabasesize:

                The default size of a database when it is created.
                This value is in megabytes and the default is 2MB.

   
   
   ctappreten:
   
   An outdated, unused variable.
   
   crecoveryflags:

                A toggle flag which will display certain recovery information
                during database recoveries.

   
   
   cserialno:

                An informational variable that stores the serial number
                of the product.

   
   
   cnestedtriggers:
   
   Flag that controls whether nested triggers are allowed.
   
   cnvdisks:

                Variable that controls the number of device structures
                that are allocated, which affects the number of devices
                that can be opened during server boot up. If the user
                defined 20 devices and this value is configured to be
                10, only 10 devices will be opened during recovery and
                the rest will get errors.
cfgsitebuf:
                This variable controls the maximum number of site
                handler structures that will be allocated. This in
                turn controls the number of site handlers that can
                be active at a given instant.
cfgrembufs:
                This variable controls the number of remote buffers
                used to send to and receive from remote sites.
                This value should be set to the number of logical
                connections configured. (See below.)
cfglogconn:
                This is the number of logical connections that can
                be open at any instant. This value controls the
                number of resource structures allocated and hence
                the overall number of logical connections across
                all sites. THIS IS NOT PER SITE.

   
   
   cfgdatabuf:

                Maximum number of pre-read packets per logical
                connection. If logical connections is set to 10 and
                cfgdatabuf is set to 3, then the number of resources
                allocated will be 30.

   
   
   cfupgradeversion:
   
   Version number of the last upgrade program run on this server.
   
   csortord:
   
   Sort order of the SQL Server.
   
   cold_sortord:

                When sort orders are changed the old sort order is
                saved in this variable to be used during recovery
                of the database after the Server is rebooted with
                the sort order change.

   
   
   ccharset:
   
   Character Set used by the SQL Server.
   
   cold_charset:

                Same as cold_sortord except it stores the previous
                Character Set.

   
     _________________________________________________________________
   
   cdflt_sortord:

                page # of sort order image definition. This should
                not be changed at any point. This is a server only
                variable.

   
   
   cdflt_charset:

                page # of character set image definition. This should
                not be changed at any point. This is a server only
                variable.

   
   
   cold_dflt_sortord:

                page # of previous sort order image definition. This
                should not be changed at any point. This is a server
                only variable.

   
   
   cold_dflt_charset:

                page # of previous character set image definition. This
                should not be changed at any point.  This is a server
                only variable.

   
     _________________________________________________________________
   
   
   
   cdeflang:
   
   Default language used by SQL Server.
   
   cmaxonline:

        Maximum number of engines that can be brought online. This
        number should not be more than the # of cpus available on the
        system. On a single-CPU system such as the RS6000 this value
        is always 1.

   
   
   cminonline:

        Minimum number of engines that should be online. This is 1 by
        default.

   
   
   cengadjinterval:
   
   A noop variable at this time.
   
   cfgstacksz:

        Stack size per task configured. This doesn't include the guard
        area of the stack space. The guard area can be altered through
        cguardsz.

   
     _________________________________________________________________
   
   cguardsz:

        This is the size of the guard area. The SQL Server will
        allocate stack space for each task by adding cfgstacksz
        (configurable through sp_configure) and cguardsz (default is
        2K).  This has to be a multiple of PAGESIZE, which will be 2k
        or 4k depending on the implementation.

   
   
   cstacksz:

        Size of fixed stack space allocated per task including the
        guard area.

   
     _________________________________________________________________
   
   
   
   Non-configurable values :
     _________________________________________________________________
   
   _TIMESLICE, CTIMEMAX ETC:_
     _________________________________________________________________
   
   
   
   1 millisecond = 1/1000th of a second.
   1 microsecond = 1/1000000th of a second.
   "Tick": the interval between two clock interrupts in real time.
   
  "cclkrate" :
  
   

        A value specified in microsecond units.
        Normally, on systems where a fine-grained timer is not
        available or the Operating System cannot set sub-second
        alarms, this value is set to 1000000 microseconds, which is
        1 second. In other words an alarm will go off every second,
        i.e. you get 1 tick per second.

        On Sun4 this is set to 100000 microseconds, which results in
        an interrupt going off every 1/10th of a second, i.e. 10
        ticks per second.

   
   
  "avetimeslice" :
  
   

        A value specified in millisecond units.
        This is the value given in "sp_configure", <timeslice value>.
        The milliseconds are converted to microseconds and finally
        to tick values:

                ticks = <avetimeslice> * 1000 / cclkrate
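   The conversion can be checked with shell arithmetic (the values are
   illustrative; the clamp to 1 tick mirrors the timeslice rule
   described below):

```shell
#!/bin/sh
# Sketch of the avetimeslice -> ticks conversion described above.
AVETIMESLICE=100     # sp_configure timeslice value, in milliseconds
CCLKRATE=100000      # microseconds between clock interrupts

ticks=`expr $AVETIMESLICE \* 1000 / $CCLKRATE`
if [ "$ticks" -lt 1 ] ; then
    ticks=1          # timeslice is never less than 1 tick
fi
echo "timeslice = $ticks tick(s)"
```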

   
   
   "timeslice" :
     _________________________________________________________________
   

        The unit of this variable is in ticks.
        This value is derived from "avetimeslice". If "avetimeslice"
        is less than 1000 milliseconds then timeslice is set to 1 tick.

   
   
  "ctimemax" :
  
   
   
   The unit of this variable is in ticks.

        A task is considered to be in an infinite loop if the ticks
        consumed by it exceed the ctimemax value. This is when you
        get timeslice -201 or -1501 errors.

   
   
  "cschedspins" :
  
   For more information see Q1.3.

        This value alters the behavior of the SQL Server scheduler.
        The scheduler will either run a qualified task or look
        for I/O completion or sleep for a while before it can
        do anything useful.

        The cschedspins value determines how often the scheduler
        will sleep, not how long it will sleep. A low value suits
        an I/O-bound SQL Server while a high value suits a CPU-bound
        SQL Server. Since most SQL Servers run a mixed load, this
        value needs to be fine-tuned.

        Based on practical behavior in the field, a single-engine
        SQL Server should have cschedspins set to 1 and a multi-engine
        server should have it set to 2000.

   
   
   Now that we've defined the units of these variables what happens when
   we change cclkrate ?
   
   Assume we have a cclkrate=100000.

   A clock interrupt will occur every 100000/1000000 = 1/10th of a
   second. A task that starts with 1 tick and can run up to
   "ctimemax=1500" ticks can potentially take 1/10 s * (1500 + 1)
   ticks, which is about 150 seconds.

   Now change the cclkrate to 75000.

   A clock interrupt will occur every 75000/1000000 = 3/40th of a
   second. A task that starts with 1 tick and can run up to
   ctimemax=1500 ticks can potentially take 3/40 s * (1500 + 1)
   ticks, which is about 113 seconds.

   Decreasing the cclkrate value decreases the time allowed to each
   task. If a task cannot voluntarily yield within that time, the
   scheduler will kill it.
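   The arithmetic above can be verified with a few lines of sh:

```shell
#!/bin/sh
# Maximum run time before a task is considered to be looping, for a
# given cclkrate and ctimemax (values as in the example above).
CCLKRATE=100000      # microseconds per tick
CTIMEMAX=1500        # ticks

# (ctimemax + 1) ticks * cclkrate microseconds/tick -> seconds
secs=`expr \( $CTIMEMAX + 1 \) \* $CCLKRATE / 1000000`
echo "a task may run ~$secs seconds before a timeslice error"
```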
   
   UNDER NO CIRCUMSTANCES should the cclkrate value be changed. The
   default ctimemax value should be set to 1500. This is an empirical
   value and should be changed only under special circumstances,
   strictly under the guidance of Sybase DSE.
     _________________________________________________________________
   
   
   
   cfgdbname:

                Name of the master device is saved here. This is 64
                bytes in length.

   
   
   cfgpss:

                This is a derived value from cusrconnections + 3.
                See cusrconnections above.

   
   
   cfgxdes:

                This value defines the number of transactions that
                can be performed by a task at a given instant.
                Changing this value to be more than 32 will have no
                effect on the server.
cfgsdes:
                This value defines the number of open tables per
                task. This will be typically for a query. This
                will be the number of tables specified in a query
                including subqueries.

                Sybase Advises not to change this value. There
                will be significant change in the size of per user
                resource in SQL Server.

   
   
   cfgbuf:

                This is a derived value: the total memory configured,
                minus the resource sizes for databases, objects,
                locks and other kernel memory.

   
   
   cfgdes:
   
   This is the same as cdesnum; other values will have no effect on it.
   
   cfgprocedure:
   
   This is a derived value, based on the cpcacheprcnt variable.
   
   cfglocks:
   
   This is the same as clocknum; other values will have no effect on it.
   
   cfgcprot:

        This variable defines the number of cache protectors per
        task. It is used internally by the SQL Server.

        Sybase advises against modifying this value; the default of
        15 is more than sufficient.

   
   
   cnproc:

        This is a derived value based on cusrconnections + <extra> for
        Sybase internal tasks that are both visible and non-visible.

   
   
   cnmemmap:

        This is an internal variable that will keep track of SQL Server
        memory.

        Modifying this value will not have any effect.

   
   
   cnmbox:

        Number of mailbox structures that need to be allocated.
        Used more in VMS environments than in UNIX.

   
   
   cnmsg:
   
   Used in tandem with cnmbox.
   
   cnmsgmax:
   
   Maximum number of messages that can be passed between mailboxes.
   
   cnblkio:

        Number of disk I/O requests (async and direct) that can be
        processed at a given instant. This is a global value for all
        the engines, not a per-engine value.

        This value is directly dependent on the number of I/O requests
        that can be processed by the Operating System, and so varies
        from one Operating System to another.

   
   
   cnblkmax:

        Maximum number of I/O requests that can be processed at any
        given time.

        Normally cnblkio, cnblkmax and cnmaxaio_server should be the
        same.

   
   
   cnmaxaio_engine:

        Maximum number of I/O requests that can be processed by one
        engine. Since engines are Operating System processes, this
        value should be set if the Operating System imposes a
        per-process limit; otherwise it is a no-op.

   
   
   cnmaxaio_server:

        This is the total number of I/O requests the SQL Server can
        do. This value is directly dependent on the number of I/O
        requests that can be processed by the Operating System, and
        so varies from one Operating System to another.

   
   
   csiocnt:
   
   not used.
   
   cnbytio:

        The counterpart, for network I/O requests, of the disk I/O
        requests above. This includes disk/tape dumps as well. The
        value applies to the whole SQL Server, across all engines.

   
   
   cnbytmax:
   
   Maximum number of network I/O requests, including disk/tape dumps.
   
   cnalarm:

        Maximum number of alarms, including those used by the
        system. Alarms are typically consumed when users issue
        "waitfor delay" commands.

   
   
   cfgmastmirror:
   
   Mirror device name for the master device.
   
   cfgmastmirror_stat:

        Status of the mirror devices for the master device, e.g.
        serial/dynamic mirroring.

   
   
   cindextrips:

        This value determines how long an index buffer ages before it
        is removed from the cache.

   
   
   coamtrips:

        This value determines how long an OAM buffer ages before it
        is removed from the cache.

   
   
   cpreallocext:

        This value determines the number of extents that will be
        allocated while doing BCP.

   
   
   cbufwashsize:

        This value determines when modified buffers in the cache are
        flushed.

                    Q1.9: HOW DO I CORRECT _TIMESLICE -201_
                                       
   
     _________________________________________________________________
   
  Why Increase It?
  
   Basically, increasing it allows a task more time on the CPU before it
   must yield. Each task on the system is scheduled onto the CPU for a
   fixed period of time, called the timeslice, during which it does some
   work; the work resumes when the task's next turn comes around.
   
   The process has up until the value of _ctimemax_ (a config block
   variable) to finish its task. As the task is working away, the
   scheduler counts down ctimemax units. When it gets to the value of
   _ctimemax_ - 1, if it gets _stuck_ and for some reason cannot be taken
   off the CPU, then a timeslice error gets generated and the process
   gets infected.
   
   On the other hand, SQL Server will allow a Server process to run as
   long as it needs to. It will not swap the process out for another
   process to run. The process will decide when it is "done" with the
   Server CPU. If, however, a process goes on and on and never
   relinquishes the Server CPU, then Server will timeslice the process.
   
  Potential Fix
    1. Shutdown the SQL Server
    2. %buildmaster -d_your_device_ -yctimemax=2000
     3. Restart your SQL Server. If the problem persists, contact Sybase
        Technical Support and tell them what you have already done.
       
   
     _________________________________________________________________
Q1.10: What is a SQL Server?

----------------------------------------------------------------------------

Overview

Before Sybase System 10 (as they call it) we had Sybase 4.x. Sybase System
10 has some significant improvements over Sybase 4.x product line. Namely:

   * the ability to allocate more memory to the dataserver without degrading
     its performance.
   * the ability to have more than one database engine to take advantage of
     multi-processor cpu machines.
   * a minimally intrusive process to perform database and transaction
     dumps.

Background and More Terminology

A SQL Server is simply a Unix process. It is also known as the database
engine. It has multiple threads to handle asynchronous I/O and other tasks.
The number of threads spawned is the number of engines (more on this in a
second) times five. This is the current implementation of Sybase System 10,
10.0.1 and 10.0.2 on IRIX 5.3.

Each SQL dataserver allocates the following resources from a host machine:

   * memory and
   * raw partition space.

Each SQL dataserver can have up to 255 databases. In most implementations
the number of databases is limited to what seems reasonable based on the
load on the SQL dataserver. That is, it would be impractical to house all of
a large company's databases under one SQL dataserver because the SQL
dataserver (a Unix process) will become overloaded.

That's where the DBA's experience comes in with interrogation of the user
community to determine how much activity is going to result on a given
database or databases and from that we determine whether to create a new SQL
Server or to house the new database under an existing SQL Server. We do make
mistakes (and businesses grow) and have to move databases from one SQL
Server to another. And at times SQL Servers need to move from one CPU server
to another.

With Sybase System 10, each SQL Server can be configured to have more than
one engine (each engine is again a Unix process). There's one primary engine
that is the master engine and the rest of the engines are subordinates. They
are assigned tasks by the master.

Interprocess communication among all these engines is accomplished with
shared memory.

     Sometimes when a DBA issues a Unix kill command to extinguish a
     maverick SQL Server, the subordinate engines are forgotten. This
     leaves the shared memory allocated, and eventually we may get into
     situations where swapping occurs because this memory is locked. To
     find engines that belong to no master SQL Server, simply look for
     engines owned by /etc/init (process id 1). These engines can be
     killed -- this is just FYI and is a DBA duty.
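A quick way to spot such orphaned engines is sketched below, assuming a SysV-style ps (PID in column 2, PPID in column 3) and that the engine binary is named dataserver:

```shell
# List processes named "dataserver" whose parent is init (PPID 1);
# with the master engine gone, these are orphaned subordinate engines.
ps -ef | awk '$3 == 1 && /dataserver/ { print "orphaned engine, pid " $2 }'
```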

Before presenting an example of a SQL Server, some other topics should be
covered.

Connections

A SQL Server has connections to it. A connection can be viewed as a user
login but it's not necessarily so. That is, a client (a user) can spark up
multiple instances of their application and each client establishes its own
connection to the SQL dataserver. Some clients may require two or more per
invocation. So typically DBA's are only concerned with the number of
connections because the number of users typically does not provide
sufficient information for us to do our job.

     Connections take up SQL Server resources, namely memory, leaving
     less memory for the SQL Servers' available cache.

SQL Server Buffer Cache

In Sybase 4.0.1 there was a limit to the amount of memory that could be
allocated to a SQL Server. It was around 80MB, with 40MB being the typical
max. This was due to internal implementations of Sybase's data structures.

With Sybase System 10 there really is no limit. For instance, we have a SQL
Server cranked up to 300MB.

The memory in a SQL Server is primarily used to cache data pages from disk.
Consider that the SQL Server is a lightweight Operating System: handling
users (connections), allocating memory to users, keeping track of which data
pages need to be flushed to disk, and so on. Very sophisticated and
complex. Obviously if a data page is found in memory it's much faster to
retrieve than going out to disk.

Each connection takes away a little bit from the available memory that is
used to cache disk pages. Upon startup, the SQL Server pre-allocates the
memory that is needed for each connection so it's not prudent to configure
500 connections when only 300 are needed. We'd waste 200 connections and the
memory associated with that. On the other hand, it is also imprudent to
under configure the number of connections; users have a way of soaking up a
resource (like a SQL Server) and if users have all the connections a DBA
cannot get into the server to allocate more connections.

One of the neat things about a SQL Server is that it reaches (just like a
Unix process) a working set. That is, upon startup it'll do a lot of
physical I/O's to seed its cache, to get lookup information for typical
transactions and the like. So initially, the first users have heavy hits
because their requests have to be performed as a physical I/O. Subsequent
transactions have less physical I/O and more logical I/O's. Logical I/O is
an I/O that is satisfied in the SQL Servers' buffer cache. Obviously, this
is the preferred condition.

DSS vs OLTP

We throw around these terms as if everyone is supposed to know the high-tech
lingo. The problem is that they are two different animals, each requiring the
SQL Server to be tuned accordingly.

Well, here's the low down.

DSS
     Decision Support System
OLTP
     Online Transaction Processing

What do these mean? OLTP applications are those that have very short orders
of work for each connection: fetch this row and, with the results of it,
update one or two other rows. Basically, a small number of rows is affected
per transaction, in rapid succession, with no significant wait times between
operations in a transaction.

DSS is the lumbering elephant in the database world (unless you do some
tricks... out of this scope). DSS requires a user to comb through gobs of
data to aggregate some values, so the transactions typically involve
thousands of rows. A big difference from OLTP.

We never want to have DSS and OLTP on the same SQL Server because the nature
of OLTP is to grab things quickly but the nature of DSS is to stick around
for a long time reading tons of information and summarizing the results.

What a DSS application does is flush out the SQL Server's data page cache
because of its tremendous amount of I/O. This is obviously very bad for
OLTP applications, because the small transactions are now hurt by this
trauma. When it was only OLTP, a great percentage of I/O was logical
(satisfied in the cache); now transactions must perform physical I/O.

That's why it's important in Sybase not to mix DSS and OLTP, at least until
System 11 arrives.

     Sybase System 11 release will allow for the mixing of OLTP and DSS
     by allowing the DBA to partition (and name) the SQL Server's
     buffer cache and assign it to different databases and/or objects.
     The idea is to allow DSS to only affect their pool of memory and
     thus allowing OLTP to maintain its working set of memory.

Asynchronous I/O

Why async I/O? The idea is in a typical online transaction processing (OLTP)
application you have many connections (over 200 connections) and short
transactions: get this row, update that row. These transactions are
typically spread across different tables of the databases. The SQL Server
can then perform each one of these asynchronously without having to wait for
others to finish. Hence the importance of having async I/O fixed on our
platform.

Engines

Sybase System 10 can have more than one engine (as stated above). Sybase has
trace flags to pin the engines to a given CPU processor but we typically
don't do this. It appears that the master engine goes to processor 0 and
subsequent subordinates to the next processor.

Currently, Sybase does not scale linearly. That is, five engines don't
make Sybase perform five times as fast; in practice we max out with four
engines. After that, performance starts to degrade. This is supposed to be
fixed with Sybase System 11.

Putting Everything Together

As previously mentioned, a SQL Server is a collection of databases with
connections (that are the users) to apply and retrieve information to and
from these containers of information (databases).

The SQL Server is built and its master device is typically built over a
medium sized (50MB) raw partition. The tempdb is built over a cooked
(regular - as opposed to a raw device) file system to realize any
performance gains by buffered writes. The databases themselves are built
over the raw logical devices to ensure their integrity.

Physical and Logical Devices

Sybase likes to live in its own little world. This shields the DBA from the
outside world known as Unix (or VMS). However, it needs to have a conduit to
the outside world and this is accomplished via devices.

All physical devices are mapped to logical devices. That is, given a
physical device (such as /lv1/dumps/tempdb_01.efs or /dev/rdsk/dks1ds0) it
is mapped by the DBA to a logical device. Depending on the type of the
device, it is allocated, by the DBA, to the appropriate place (vague
enough?).

Okay, let's try and clear this up...

Dump Device

The DBA may decide to create a device for dumping the database nightly. The
DBA needs to create a dump device.

We'll call it datadump_for_my_db logically in the database, but map it to
the physical world as /lv1/dumps/in_your_eye.dat. So the DBA will write a
script that connects to the SQL Server and issues a command like this:

     dump database my_stinking_db to datadump_for_my_db
     go

and the backupserver (out of this scope) takes the contents of
my_stinking_db and writes it out to the disk file /lv1/dumps/in_your_eye.dat

That's a dump device. The thing is that it's not preallocated. This special
device is simply a window to the operating system.

Data and Log Devices

Ah, now we are getting into the world of pre-allocation. Databases are built
over raw partitions. The reason for this is because Sybase needs to be
guaranteed that all its writes complete successfully. Otherwise, if it
posted to a file system buffer (as in a cooked file system) and the machine
crashed, as far as Sybase is concerned the write was committed. It was not,
however, and integrity of the database was lost. That is why Sybase needs
raw partitions. But back to the matter at hand...

When building a new SQL Server, the DBA determines how much space they'll
need for all the databases that will be housed in this SQL Server.

Each production database is composed of data and log.

The data is where the actual information resides. The log is where the
changes are kept. That is, every row that is updated/deleted/inserted gets
placed into the log portion and then applied to the data portion of the
database.

     That's why the DBA strives to place the raw devices for logs on
     separate disks: everything has to single-thread through the
     log.

A transaction is a collection of SQL statements (insert/delete/update) that
are grouped together to form a single unit of work. Typically they map very
closely to the business.

I'll quote the Sybase SQL Server System Administration guide on the role of
the log:

     The transaction log is a write-ahead log. When a user issues a
     statement that would modify the database, SQL Server automatically
     writes the changes to the log. After all changes for a statement
     have been recorded in the log, they are written to an in-cache
     copy of the data page. The data page remains in cache until the
     memory is needed for another database page. At that time, it is
     written to disk. If any statement in a transaction fails to
     complete, SQL Server reverses all changes made by the transaction.
     SQL Server writes an "end transaction" record to the log at the
     end of each transaction, recording the status (success or failure)
     of the transaction.

As such, the log will grow as user connections affect changes to the
database. The need arises to then clear out the log of all transactions that
have been flushed to disk. This is performed by issuing the following
command:

     dump transaction my_stinking_db to logdump_for_my_db
     go

The SQL Server will write to the dumpdevice all transactions that have been
committed to disk and will delete the entries from its copy, thus freeing up
space in the log. Dumping of the transaction logs is accomplished via cron.
We schedule the heavily hit databases every 20 minutes during peak times.
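As a sketch, such a cron entry might look like the following (the path and the dump_tran.sh wrapper script, which would issue the "dump transaction" command through isql, are hypothetical; the schedule and password handling are site-specific):

```
# crontab entry: dump the transaction log of a busy database
# every 20 minutes
0,20,40 * * * * /usr/sybase/sys.config/dump_tran.sh my_stinking_db
```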

     A single user can fill up the log by having begin transaction with
     no corresponding commit/rollback transaction. This is because all
     their changes are being applied to the log as an open-ended
     transaction, which is never closed. This open-ended transaction
     cannot be flushed from the log, and therefore grows until it
     occupies all of the free space on the log device.

And the way we dump it is with a dump device. :-)

An Example

If the DBA has three databases to plop on this SQL Server, needing a total
of 800MB of data and 80MB of log (because that's what really matters
to us), then they'd probably do something like this:

  1. allocate sufficient raw devices to cover the data portion of all the
     databases
  2. allocate sufficient raw devices to cover the log portion of all the
     databases
  3. start allocating the databases to the devices.

For example, assuming the following database requirements:

                                  Database
                                Requirements

                                DB Data  Log

                                a  300   30

                                b  400   40

                                c  100   10

and the following devices:
                                   Devices

                      Logical          Physical      Size

                   dks3d1s2_data  /dev/rdsk/dks3d1s2 500

                   dks4d1s2_data  /dev/rdsk/dks4d1s2 500

                   dks5d1s0_log   /dev/rdsk/dks5d1s0 200

then the DBA may elect to create the databases as follows:

     create database a on dks3d1s2_data = 300 log on dks5d1s0_log = 30
     create database b on dks4d1s2_data = 400 log on dks5d1s0_log = 40
     create database c on dks3d1s2_data = 50, dks4d1s2_data = 50 log on
     dks5d1s0_log = 10

Some of the devices will have extra space available because our database
allocations didn't use up all the space. That's fine, because it can be used
for future growth. While the Sybase SQL Server is running, no other Sybase
SQL Server can re-allocate these physical devices.
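For illustration, each logical device in the table above would first have to be created with disk init, whose size argument is given in 2K pages (512 pages per MB). A minimal sketch follows; the vdevno shown is a hypothetical virtual device number:

```shell
# disk init takes its size in 2K pages: 512 pages per MB.
MB=500
PAGES=`expr $MB \* 512`
cat <<EOF
disk init name = "dks3d1s2_data",
    physname = "/dev/rdsk/dks3d1s2",
    vdevno = 3,        -- hypothetical virtual device number
    size = $PAGES      -- 500 MB = 256000 2K pages
go
EOF
```

The printed statement would then be fed to the server with isql before the create database commands above are run.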

TempDB

TempDB is simply a scratch pad database. It gets recreated when a SQL Server
is rebooted. The information held in this database is temporary data. A
query may build a temporary table to assist it; the Sybase optimizer may
decide to create a temporary table to assist itself.

Since this is an area of constant activity we create this database over a
cooked file system which has historically proven to have better performance
than raw - due to the buffered writes provided by the Operating System.

Port Numbers

When creating a new SQL Server, we allocate a port to it (currently, DBA
reserves ports 1500 through 1899 for its use). We then map a host name to
the different ports: hera, fddi-hera and so forth. We can actually have more
than one port number for a SQL Server but we typically don't do this.
----------------------------------------------------------------------------

                Q1.11: CERTIFIED SYBASE PROFESSIONAL - _CSPDBA_
                                       
   
     _________________________________________________________________
   
   Here's a list of commonly asked questions about becoming a _CSPDBA_:
   
  What are the exams like?
  
   The exams are administered by Drake Testing and Technologies and are
   given at Drake authorized testing centers. The Environment and
   Operations exams each take an hour, and the Fundamentals exam takes an
   hour and a half. Each exam contains between 60 and 90 questions. Many
   of the questions are _multiple choice_, some are _select all that
   apply_ and some are _fill in the blank_. Depending on the exam, a
   score of 67% - 72% is required to pass. The exams are challenging, but
   fair.
   
   Before taking an exam, Drake provides you with a short _tutorial exam_
   that you can take to get an idea of the format of the exam questions.
   
   You receive a report each time you complete an exam. The report shows
   the passing score, your total score, and your score in various
   sections of the exam. (You aren't told which specific questions you
   answered correctly or incorrectly.)
   
  How do I register for the exams?
  
   Call 1-800-8SYBASE, select option 2, then option 2 again. You will be
   connected to a Drake representative. Currently each exam costs $150.
   
  What happens once I pass?
  
   You will receive a certificate in the mail about a month after you've
   passed all the exams. When you receive your certificate, you'll also
   have the opportunity to enter into a licensing agreement that will
   allow you to use the Certified Sybase Professional service mark (logo)
   in your office and on your business cards. If your company is an Open
   Solutions partner, your certification is acknowledged by the
   appearance of the CSP logo with your company's name in the Open
   Solutions Directory. If you have a CompuServe account, you can obtain
   access to a private section of _Sybase OpenLine_, a technical forum on
   CompuServe.
   
  What topics are covered?
     * Sybase SQL Server Fundamentals Exam Topics:
          + Sybase client/server architecture
          + SQL Server objects
          + Use of tables
          + Use of indexes
          + Use of columns
          + Use of defaults
          + Use of triggers
          + Use of keys
          + Use of check constraints
          + Use of datatypes
          + Use of cursors
          + System datatypes
          + Views
          + Data integrity
          + Rules
          + Select statements
          + Transaction management
          + Locking
          + Stored procedures
          + Local and global variables
     * Sybase SQL Server Environment Exam Topics:
          + Configuration and control
          + Starting the SQL Server
          + Accessing remote servers
          + Stopping the SQL Server
          + Using buildmaster
          + Installing the SQL Server
          + Using the standard databases
          + Admin Utilities and Tools
          + System stored procedures
          + Using system tables
          + Load and unload utilities
          + Resources
          + Disk mirroring
          + Creating databases
          + Managing segments
          + Managing transaction logs
          + Managing thresholds
          + Managing audit logs
          + Devices
          + Security
          + Establishing security
          + Roles
          + Managing user accounts
     * Sybase SQL Server Operations Exam Topics:
          + Monitoring
          + Starting the Backup Server
          + Monitoring the errorlog
          + Diagnostics
          + Resolving contention and locking problems
          + Managing application stored procedures
          + Recovery
          + Backup
          + Load
          + Backup strategies
          + Security
          + Establishing security
          + Roles
          + Managing user accounts
          + Admin utilities and tools
          + System stored procedures
          + Using system tables
          + Load and unload utilities
            
   
     _________________________________________________________________

                            Q1.12: RAID AND SYBASE
                                       
   
     _________________________________________________________________
   
   Here's a short summary of what you need to know about Sybase and RAID.
   
   
   The newsgroup comp.arch.storage has a detailed FAQ on RAID, but here
   are a few definitions:
   
  RAID
  
   RAID means several things at once. It provides increased performance
   through disk striping, and/or resistance to hardware failure through
   either mirroring (fast) or parity (slower but cheaper).
   
  RAID 0
  
   RAID 0 is just striping. It allows you to read and write quickly, but
   provides no protection against failure.
   
  RAID 1
  
   RAID 1 is just mirroring. It protects you against failure, and
   generally reads and writes as fast as a normal disk. It uses twice as
   many disks as normal (and sends twice as much data across your SCSI
   bus, but most machines have plenty of extra capacity on their SCSI
   busses.)
   
      _Sybase mirroring always reads from the primary copy, so it does not
      increase read performance._
     
  RAID 0+1
  
   RAID 0+1 (also called RAID 10) is striping and mirroring together.
   This gives you the highest read and write performance of any of the
   raid options, but uses twice as many disks as normal.
   
  RAID 4/RAID 5
  
   RAID 4 and 5 have disk striping and use 1 extra disk to provide
   _parity_. Various vendors have various optimizations, but this RAID
   level is generally much slower at writes than any other kind of RAID.
   
  RAID 7
  
   RAID 7 is a marketing slogan used by a company which unethically
   advertises on Usenet. I would not advise doing business with them.
   
  Details
  
   Most hardware RAID controllers also provide a battery-backed RAM cache
   for writing. This is very useful, because it allows the disk to claim
   that the write succeeded before it has done anything. If there is a
   power failure, the information will (hopefully) be written to disk
   when the power is restored. The cache is very important because
   database log writes cause the process doing the writes to stop until
   the write is successful. Systems with write caching thus complete
   transactions much more quickly than systems without.
   
   What RAID levels should my data, log, etc be on? Well, the log disk is
   _frequently written_, so it should not be on RAID 4 or 5. If your data
   is _infrequently written_, you could use RAID 4 or 5 for it, because
   you don't mind that writes are slow. If your data is frequently
   written, you should use RAID 0+1 for it. Striping your data is a very
   effective way of avoiding any one disk becoming a hot-spot.
   Traditionally Sybase databases were divided among devices by a human
   attempting to determine where the hot-spots are. Striping does this in
   a straight-forward fashion, and also continues to work if your data
   access patterns change.
   
   Your tempdb is data but it is frequently written, so it should not be
   on RAID 4 or 5.
   
   If your RAID controller does not allow you to create several different
   kinds of RAID volumes on it, then your only hope is to create a huge
   RAID 0+1 set. If your RAID controller does not support RAID 0+1, you
   shouldn't be using it for database work.
     _________________________________________________________________

                  Q1.13: HOW TO SWAP A DB DEVICE WITH ANOTHER
                                       
   
     _________________________________________________________________
   
   Here are some approaches:
    1. Backup the database, drop the databases, drop the devices, and
       rebuild the devices/databases, before a restore. Takes time, can
       be tricky without creation scripts. Make sure the database
       fragments are rebuilt in the right order, or the data/log could
       get hosed.
    2. Do a physical dump (using _dd(1)_, or such utility) of the device,
       and physical restoration of the device on the new device, and hack
       away at the data dictionary. Potentially messy, dangerous, and
       time consuming.
     3. Mirror the device to be moved onto the new device, then unmirror
        the primary device, thereby making the _backup_ the primary
        device. Repeat this for all devices until the old disk is free.
        Clean, easy, and not in need of backups, although dumping the
        master database before and after is highly recommended.
        
   The third and first are the best approaches by far. The second should
   be avoided by all but the clinically insane/curious/fired employee.
   
     _Backups are a requisite in all cases, just in case_.
     
   
     _________________________________________________________________

                       Q1.14: SERVER NAMING AND RENAMING
                                       
   
     _________________________________________________________________
   
   There are three totally separate places where SQL Server _names_
   reside, causing much confusion.
   
  SQL Server Host Machine _interfaces_ File
  
   A _master_ entry in here for server _TEST_ will provide the network
   information that the server is expected to listen on. The -S parameter
   to the dataserver executable tells the server which entry to look for,
   so in the RUN_TEST file, -STEST will tell the dataserver to look for
   the entry under TEST in the interfaces file and listen on any network
   parameters specified by 'master' entries.

TEST
        master tcp ether hpsrv1 1200
        query tcp ether hpsrv1 1200

     Note that preceding the _master/query_ entries there's a tab.
     
   This is as far as the name _TEST_ is used. Without further
   configuration the server does not know its name is _TEST_, nor do any
   client applications. Typically there will also be _query_ entries
   under _TEST_ in the local _interfaces_ file, and client programs
   running on the same machine as the server will pick this connection
   information up. However, there is nothing to stop the _query_ entry
   being duplicated under another name entirely in the same _interfaces_
   file.

ARTHUR
        query tcp ether hpsrv1 1200

   _isql -STEST_ or _isql -SARTHUR_ will connect to the same server. The
   name is simply a search parameter into the _interfaces_ file.
   
  Client Machine _interfaces_ File
  
   Again, since the server name specified to the client is simply a
   search parameter for Open Client into the _interfaces_ file, SQL.INI
   or WIN.INI, the name is largely irrelevant. It is often set to
   something meaningful to the users, especially where they might have a
   choice of servers to connect to. Multiple query entries can also be
   set to point to the same server, possibly using different network
   protocols. Eg. if _TEST_ has the following master entries on the host
   machine:

TEST
       master tli spx /dev/nspx/ \xC12082580000000000012110
       master tcp ether hpsrv1 1200

   Then the client can have a meaningful name:

ACCOUNTS_TEST_SERVER
        query tcp ether hpsrv1 1200

   or alternative protocols:

TEST_IP
        query tcp ether hpsrv1 1200
TEST_SPX
        query tli spx /dev/nspx/ \xC12082580000000000012110

  sysservers
  
   This system table holds information about remote SQL Servers that
   the local one might want to connect to, and also provides a method of
   naming the local server.
   
   Entries are added using the sp_addserver system procedure - add a
   remote server with this format:

        sp_addserver server_name, null, network_name

   server_name is any name you wish to refer to a remote server by, but
   network_name must be the name of the remote server as it appears in
   the interfaces file on your local server's machine. It normally makes
   sense to make the server_name the same as the network_name, but you
   can easily do:

        sp_addserver LIVE, null, ACCTS_LIVE

   When you execute, for example, exec LIVE.master..sp_helpdb, the local
   SQL Server translates LIVE to ACCTS_LIVE and tries to talk to
   ACCTS_LIVE via the ACCTS_LIVE entry in the local interfaces file.
   
   Finally, a variation on the sp_addserver command:

        sp_addserver LOCALSRVNAME, local

   names the local server (after a restart). This is the name the server
   reports in the errorlog at startup, the value returned by
   @@SERVERNAME, and the value placed in Open Client server messages. It
   can be completely different from the names in RUN_SRVNAME or in local
   or remote interfaces - it has _no_ bearing on connectivity matters.
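
   To check how a server is currently named, the stock system procedure
   sp_helpserver lists the sysservers entries, and the global variable
   @@servername returns the local name. A minimal sketch, run from isql
   (the server name used here is illustrative, and the restart is still
   required before the new local name takes effect):

```sql
-- Name the local server (takes effect after the next restart).
sp_addserver LOCALSRVNAME, local
go
-- Inspect sysservers: the local server's row is flagged in the
-- status column.
sp_helpserver
go
-- After the restart this returns LOCALSRVNAME.
select @@servername
go
```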
     _________________________________________________________________

             Q1.15: HOW CAN I TELL THE DATETIME MY SERVER STARTED?
                                       
   
     _________________________________________________________________
   
   The normal way would be to look at the errorlog, but this is not
   always convenient or even possible. From a SQL session you can find
   out the server startup time to within a few seconds using:

   select  "Server Start Time" = crdate
   from    master..sysdatabases
   where   name = "tempdb"
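
   Since tempdb is rebuilt at every boot, its creation date also yields
   an approximate uptime. A hedged variation on the same query, using
   the standard Transact-SQL datediff() function:

```sql
-- Approximate server uptime in days: tempdb is recreated at every
-- boot, so its crdate is effectively the server start time.
select "Days Up" = datediff(dd, crdate, getdate())
from   master..sysdatabases
where  name = "tempdb"
```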

   
     _________________________________________________________________

                    Q1.16: RAW PARTITIONS OR REGULAR FILES?
                                       
   
     _________________________________________________________________
   
   Hmmm... as always, the answer depends on the vendor's implementation
   of cooked file system I/O for the SQL Server...
   
Performance Hit (synchronous vs asynchronous)

   If, on a given platform, the SQL Server performs file system I/O
   synchronously, then the server blocks on every read/write and
   throughput decreases tremendously.
   
   The way the SQL Server typically works is that it issues an I/O
   (read/write), saves the I/O control block and continues to do other
   work (on behalf of other connections). It periodically polls the
   work queues (network, I/O) and resumes connections when their work
   has completed (I/O completed, network data transmitted...).
   
Performance Hit (bcopy issue)

   Assuming that the file system I/O is asynchronous (this can be done on
   SGI), a performance hit may be realized when bcopy'ing the data from
   kernel space to user space.
   
   On a read, cooked I/O typically has to go from disk to kernel
   buffers, and from kernel buffers to user space (again, SGI has
   something called direct I/O which allows I/O to go directly to user
   space). The extra layer of kernel buffers is inherently slow: the
   data is moved between kernel buffers and user space using bcopy().
   On small operations this typically isn't much of an issue, but in an
   RDBMS scenario the bcopy() layer is a significant performance hit
   because it's done so often...
   
Performance Gain!

   It's true: using file systems, you can at times get performance
   gains, assuming that the SQL Server on your platform does the I/O
   asynchronously (although there's a caveat on this too... covered
   below).
   
   If your machine has sufficient memory and extra CPU capacity, you can
   realize some gains by having writes return immediately because they're
   posted to memory. Reads will gain from the anticipatory fetch
   algorithm employed by most O/S's.
   
   You'll need extra memory to house the kernel buffered data and you'll
   need extra CPU capacity to allow bdflush() to write the dirty data out
   to disk... eventually... but with everything there's a cost: extra
   memory and free CPU cycles.
   
   One argument is that instead of leaving the extra memory free for
   the O/S, you should give it to the SQL Server and let it do its own
   caching... but that's a different thread...
   
Data Integrity and Cooked File System

   If the Sybase SQL Server is _not_ certified to be used over a cooked
   file system, then because of the kernel buffering described above you
   may face database corruption by using a cooked file system anyway.
   The SQL Server _thinks_ it has posted its changes to disk, but in
   reality they have gone only to memory. If the machine halts before
   bdflush() has a chance to flush memory out to disk, your database
   _may_ become corrupted.
   
   Some O/S's allow cooked files to be opened in a _write-through_
   mode, and whether this is safe really depends on whether the SQL
   Server has been certified on cooked file systems. If it has, it
   means that when the SQL Server opens a device which is on a file
   system, it fcntl()'s the device to write-through.
   
When to use cooked file system?

   I typically build my tempdb on a cooked file system and don't worry
   about data integrity, because tempdb is _rebuilt_ every time the SQL
   Server is rebooted.
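
   If you do this, the device is created with disk init just as for a
   raw partition, only with a file path as the physname. A sketch, with
   hypothetical path, vdevno and size (size is in 2K pages on classic
   SQL Server, so 51200 pages is 100 MB):

```sql
-- Hypothetical example: a 100 MB device on a cooked file system.
-- The path and vdevno are illustrative; pick a free vdevno on
-- your server.
disk init
    name     = "tempdb_dev",
    physname = "/usr/sybase/data/tempdb_dev.dat",
    vdevno   = 4,
    size     = 51200
go
-- Then extend tempdb onto the new device (size here is in MB).
alter database tempdb on tempdb_dev = 100
go
```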
     _________________________________________________________________
-- 
Pablo Sanchez              | Ph # (415) 933.3812        Fax # (415) 933.2821
pablo@sgi.com              | Pg # (800) 930.5635  -or-  pablo_p@pager.sgi.com
===============================================================================
I am accountable for my actions.   http://reality.sgi.com/pablo [ /Sybase_FAQ ]
