<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /><title>LKML: john stultz: [PATCH 6/11] Time: i386 Conversion - part 2: Move timer_tsc.c to tsc.c</title><link href="/css/message.css" rel="stylesheet" type="text/css" /><link href="/css/wrap.css" rel="alternate stylesheet" type="text/css" title="wrap" /><link href="/css/nowrap.css" rel="stylesheet" type="text/css" title="nowrap" /><link href="/favicon.ico" rel="shortcut icon" /><script src="/js/simple-calendar.js" type="text/javascript"></script><script src="/js/styleswitcher.js" type="text/javascript"></script><link rel="alternate" type="application/rss+xml" title="lkml.org : last 100 messages" href="/rss.php" /><link rel="alternate" type="application/rss+xml" title="lkml.org : last messages by john stultz" href="/groupie.php?aid=1263" /><!--Matomo--><script> var _paq = window._paq = window._paq || []; /* tracker methods like "setCustomDimension" should be called before "trackPageView" */ _paq.push(["setDoNotTrack", true]); _paq.push(["disableCookies"]); _paq.push(['trackPageView']); _paq.push(['enableLinkTracking']); (function() { var u="//m.lkml.org/"; _paq.push(['setTrackerUrl', u+'matomo.php']); _paq.push(['setSiteId', '1']); var d=document, g=d.createElement('script'), s=d.getElementsByTagName('script')[0]; g.async=true; g.src=u+'matomo.js'; s.parentNode.insertBefore(g,s); })(); </script><!--End Matomo Code--></head><body onload="es.jasper.simpleCalendar.init();" itemscope="itemscope" itemtype="http://schema.org/BlogPosting"><table border="0" cellpadding="0" cellspacing="0"><tr><td width="180" align="center"><a href="/"><img style="border:0;width:135px;height:32px" src="/images/toprowlk.gif" alt="lkml.org" /></a></td><td width="32">&nbsp;</td><td class="nb"><div><a 
class="nb" href="/lkml"> [lkml]</a> &nbsp; <a class="nb" href="/lkml/2005"> [2005]</a> &nbsp; <a class="nb" href="/lkml/2005/12"> [Dec]</a> &nbsp; <a class="nb" href="/lkml/2005/12/15"> [15]</a> &nbsp; <a class="nb" href="/lkml/last100"> [last100]</a> &nbsp; <a href="/rss.php"><img src="/images/rss-or.gif" border="0" alt="RSS Feed" /></a></div><div>Views: <a href="#" class="nowrap" onclick="setActiveStyleSheet('wrap');return false;">[wrap]</a><a href="#" class="wrap" onclick="setActiveStyleSheet('nowrap');return false;">[no wrap]</a> &nbsp; <a class="nb" href="/lkml/mheaders/2005/12/15/410" onclick="this.href='/lkml/headers'+'/2005/12/15/410';">[headers]</a>&nbsp; <a href="/lkml/bounce/2005/12/15/410">[forward]</a>&nbsp; </div></td><td width="32">&nbsp;</td></tr><tr><td valign="top"><div class="es-jasper-simpleCalendar" baseurl="/lkml/"></div><div class="threadlist">Messages in this thread</div><ul class="threadlist"><li class="root"><a href="/lkml/2005/12/15/405">First message in thread</a></li><li><a href="/lkml/2005/12/15/405">john stultz</a><ul><li><a href="/lkml/2005/12/15/404">john stultz</a></li><li><a href="/lkml/2005/12/15/406">john stultz</a></li><li><a href="/lkml/2005/12/15/407">john stultz</a><ul><li><a href="/lkml/2006/1/3/509">Andrew Morton</a><ul><li><a href="/lkml/2006/1/4/72">john stultz</a><ul><li><a href="/lkml/2006/1/4/76">Andrew Morton</a></li></ul></li></ul></li></ul></li><li><a href="/lkml/2005/12/15/408">john stultz</a></li><li><a href="/lkml/2005/12/15/409">john stultz</a></li><li class="origin"><a href="">john stultz</a></li><li><a href="/lkml/2005/12/15/411">john stultz</a></li><li><a href="/lkml/2005/12/15/412">john stultz</a></li><li><a href="/lkml/2005/12/15/413">john stultz</a></li><li><a href="/lkml/2005/12/15/414">john stultz</a></li><li><a href="/lkml/2005/12/15/415">john stultz</a></li></ul></li></ul><div class="threadlist">Patch in this message</div><ul class="threadlist"><li><a href="/lkml/diff/2005/12/15/410/1">Get diff 1</a></li></ul></td><td width="32" rowspan="2" 
class="c" valign="top"><img src="/images/icornerl.gif" width="32" height="32" alt="/" /></td><td class="c" rowspan="2" valign="top" style="padding-top: 1em"><table><tr><td><table><tr><td class="lp">Date</td><td class="rp" itemprop="datePublished">Thu, 15 Dec 2005 18:07:39 -0700</td></tr><tr><td class="lp">From</td><td class="rp" itemprop="author">john stultz <></td></tr><tr><td class="lp">Subject</td><td class="rp" itemprop="name">[PATCH 6/11] Time: i386 Conversion - part 2: Move timer_tsc.c to tsc.c</td></tr></table></td><td></td></tr></table><pre itemprop="articleBody">Andrew, All,<br /> The conversion of i386 to use the generic timeofday subsystem <br />has been split into 6 parts. This patch, the second of six, is a <br />cleanup patch for the i386 arch in preparation for moving to the <br />generic timeofday infrastructure. It moves some code from timer_tsc.c <br />to a new tsc.c file.<br /><br />It applies on top of my timeofday-arch-i386-part1 patch. This patch is <br />part of the timeofday-arch-i386 patchset, so without the following parts <br />it is not expected to compile.<br /><br />Andrew, please consider for inclusion into your tree.<br /><br />thanks<br />-john<br /><br />Signed-off-by: John Stultz <johnstul@us.ibm.com><br /><br /> arch/i386/kernel/Makefile | 2 <br /> arch/i386/kernel/timers/common.c | 84 ---------<br /> arch/i386/kernel/timers/timer_tsc.c | 212 ------------------------<br /> arch/i386/kernel/tsc.c | 312 ++++++++++++++++++++++++++++++++++++<br /> include/asm-i386/timex.h | 34 ---<br /> include/asm-i386/tsc.h | 44 +++++<br /> 6 files changed, 358 insertions(+), 330 deletions(-)<br /><br />linux-2.6.15-rc5_timeofday-arch-i386-part2_B14.patch<br />============================================<br />diff --git a/arch/i386/kernel/Makefile b/arch/i386/kernel/Makefile<br />index 7bc053f..4c4e1e5 100644<br />--- a/arch/i386/kernel/Makefile<br />+++ b/arch/i386/kernel/Makefile<br />@@ -7,7 +7,7 @@ extra-y := head.o init_task.o vmlinux.ld<br /> 
obj-y := process.o semaphore.o signal.o entry.o traps.o irq.o vm86.o \<br /> ptrace.o time.o ioport.o ldt.o setup.o i8259.o sys_i386.o \<br /> pci-dma.o i386_ksyms.o i387.o dmi_scan.o bootflag.o \<br />- doublefault.o quirks.o i8237.o i8253.o<br />+ doublefault.o quirks.o i8237.o i8253.o tsc.o<br /> <br /> obj-y += cpu/<br /> obj-y += timers/<br />diff --git a/arch/i386/kernel/timers/common.c b/arch/i386/kernel/timers/common.c<br />index 8163fe0..535f4d8 100644<br />--- a/arch/i386/kernel/timers/common.c<br />+++ b/arch/i386/kernel/timers/common.c<br />@@ -14,66 +14,6 @@<br /> <br /> #include "mach_timer.h"<br /> <br />-/* ------ Calibrate the TSC -------<br />- * Return 2^32 * (1 / (TSC clocks per usec)) for do_fast_gettimeoffset().<br />- * Too much 64-bit arithmetic here to do this cleanly in C, and for<br />- * accuracy's sake we want to keep the overhead on the CTC speaker (channel 2)<br />- * output busy loop as low as possible. We avoid reading the CTC registers<br />- * directly because of the awkward 8-bit access mechanism of the 82C54<br />- * device.<br />- */<br />-<br />-#define CALIBRATE_TIME (5 * 1000020/HZ)<br />-<br />-unsigned long calibrate_tsc(void)<br />-{<br />- mach_prepare_counter();<br />-<br />- {<br />- unsigned long startlow, starthigh;<br />- unsigned long endlow, endhigh;<br />- unsigned long count;<br />-<br />- rdtsc(startlow,starthigh);<br />- mach_countup(&count);<br />- rdtsc(endlow,endhigh);<br />-<br />-<br />- /* Error: ECTCNEVERSET */<br />- if (count <= 1)<br />- goto bad_ctc;<br />-<br />- /* 64-bit subtract - gcc just messes up with long longs */<br />- __asm__("subl %2,%0\n\t"<br />- "sbbl %3,%1"<br />- :"=a" (endlow), "=d" (endhigh)<br />- :"g" (startlow), "g" (starthigh),<br />- "0" (endlow), "1" (endhigh));<br />-<br />- /* Error: ECPUTOOFAST */<br />- if (endhigh)<br />- goto bad_ctc;<br />-<br />- /* Error: ECPUTOOSLOW */<br />- if (endlow <= CALIBRATE_TIME)<br />- goto bad_ctc;<br />-<br />- __asm__("divl %2"<br />- 
:"=a" (endlow), "=d" (endhigh)<br />- :"r" (endlow), "0" (0), "1" (CALIBRATE_TIME));<br />-<br />- return endlow;<br />- }<br />-<br />- /*<br />- * The CTC wasn't reliable: we got a hit on the very first read,<br />- * or the CPU was so fast/slow that the quotient wouldn't fit in<br />- * 32 bits..<br />- */<br />-bad_ctc:<br />- return 0;<br />-}<br />-<br /> #ifdef CONFIG_HPET_TIMER<br /> /* ------ Calibrate the TSC using HPET -------<br /> * Return 2^32 * (1 / (TSC clocks per usec)) for getting the CPU freq.<br />@@ -146,27 +86,3 @@ unsigned long read_timer_tsc(void)<br /> rdtscl(retval);<br /> return retval;<br /> }<br />-<br />-<br />-/* calculate cpu_khz */<br />-void init_cpu_khz(void)<br />-{<br />- if (cpu_has_tsc) {<br />- unsigned long tsc_quotient = calibrate_tsc();<br />- if (tsc_quotient) {<br />- /* report CPU clock rate in Hz.<br />- * The formula is (10^6 * 2^32) / (2^32 * 1 / (clocks/us)) =<br />- * clock/second. Our precision is about 100 ppm.<br />- */<br />- { unsigned long eax=0, edx=1000;<br />- __asm__("divl %2"<br />- :"=a" (cpu_khz), "=d" (edx)<br />- :"r" (tsc_quotient),<br />- "0" (eax), "1" (edx));<br />- printk("Detected %u.%03u MHz processor.\n",<br />- cpu_khz / 1000, cpu_khz % 1000);<br />- }<br />- }<br />- }<br />-}<br />-<br />diff --git a/arch/i386/kernel/timers/timer_tsc.c b/arch/i386/kernel/timers/timer_tsc.c<br />index d395e3b..93ec4c9 100644<br />--- a/arch/i386/kernel/timers/timer_tsc.c<br />+++ b/arch/i386/kernel/timers/timer_tsc.c<br />@@ -32,10 +32,6 @@ static unsigned long hpet_last;<br /> static struct timer_opts timer_tsc;<br /> #endif<br /> <br />-static inline void cpufreq_delayed_get(void);<br />-<br />-int tsc_disable __devinitdata = 0;<br />-<br /> static int use_tsc;<br /> /* Number of usecs that the last interrupt was delayed */<br /> static int delay_at_last_interrupt;<br />@@ -45,39 +41,6 @@ static unsigned long last_tsc_high; /* m<br /> static unsigned long long monotonic_base;<br /> static seqlock_t 
monotonic_lock = SEQLOCK_UNLOCKED;<br /> <br />-/* convert from cycles(64bits) => nanoseconds (64bits)<br />- * basic equation:<br />- * ns = cycles / (freq / ns_per_sec)<br />- * ns = cycles * (ns_per_sec / freq)<br />- * ns = cycles * (10^9 / (cpu_khz * 10^3))<br />- * ns = cycles * (10^6 / cpu_khz)<br />- *<br />- * Then we use scaling math (suggested by george@mvista.com) to get:<br />- * ns = cycles * (10^6 * SC / cpu_khz) / SC<br />- * ns = cycles * cyc2ns_scale / SC<br />- *<br />- * And since SC is a constant power of two, we can convert the div<br />- * into a shift.<br />- *<br />- * We can use khz divisor instead of mhz to keep a better percision, since<br />- * cyc2ns_scale is limited to 10^6 * 2^10, which fits in 32 bits.<br />- * (mathieu.desnoyers@polymtl.ca)<br />- *<br />- * -johnstul@us.ibm.com "math is hard, lets go shopping!"<br />- */<br />-static unsigned long cyc2ns_scale; <br />-#define CYC2NS_SCALE_FACTOR 10 /* 2^10, carefully chosen */<br />-<br />-static inline void set_cyc2ns_scale(unsigned long cpu_khz)<br />-{<br />- cyc2ns_scale = (1000000 << CYC2NS_SCALE_FACTOR)/cpu_khz;<br />-}<br />-<br />-static inline unsigned long long cycles_2_ns(unsigned long long cyc)<br />-{<br />- return (cyc * cyc2ns_scale) >> CYC2NS_SCALE_FACTOR;<br />-}<br />-<br /> static int count2; /* counter for mark_offset_tsc() */<br /> <br /> /* Cached *multiplier* to convert TSC counts to microseconds.<br />@@ -135,29 +98,6 @@ static unsigned long long monotonic_cloc<br /> return base + cycles_2_ns(this_offset - last_offset);<br /> }<br /> <br />-/*<br />- * Scheduler clock - returns current time in nanosec units.<br />- */<br />-unsigned long long sched_clock(void)<br />-{<br />- unsigned long long this_offset;<br />-<br />- /*<br />- * In the NUMA case we dont use the TSC as they are not<br />- * synchronized across all CPUs.<br />- */<br />-#ifndef CONFIG_NUMA<br />- if (!use_tsc)<br />-#endif<br />- /* no locking but a rare wrong value is not a big deal */<br 
/>- return jiffies_64 * (1000000000 / HZ);<br />-<br />- /* Read the Time Stamp Counter */<br />- rdtscll(this_offset);<br />-<br />- /* return the value in ns */<br />- return cycles_2_ns(this_offset);<br />-}<br /> <br /> static void delay_tsc(unsigned long loops)<br /> {<br />@@ -222,127 +162,6 @@ static void mark_offset_tsc_hpet(void)<br /> #endif<br /> <br /> <br />-#ifdef CONFIG_CPU_FREQ<br />-#include <linux/workqueue.h><br />-<br />-static unsigned int cpufreq_delayed_issched = 0;<br />-static unsigned int cpufreq_init = 0;<br />-static struct work_struct cpufreq_delayed_get_work;<br />-<br />-static void handle_cpufreq_delayed_get(void *v)<br />-{<br />- unsigned int cpu;<br />- for_each_online_cpu(cpu) {<br />- cpufreq_get(cpu);<br />- }<br />- cpufreq_delayed_issched = 0;<br />-}<br />-<br />-/* if we notice lost ticks, schedule a call to cpufreq_get() as it tries<br />- * to verify the CPU frequency the timing core thinks the CPU is running<br />- * at is still correct.<br />- */<br />-static inline void cpufreq_delayed_get(void) <br />-{<br />- if (cpufreq_init && !cpufreq_delayed_issched) {<br />- cpufreq_delayed_issched = 1;<br />- printk(KERN_DEBUG "Losing some ticks... 
checking if CPU frequency changed.\n");<br />- schedule_work(&cpufreq_delayed_get_work);<br />- }<br />-}<br />-<br />-/* If the CPU frequency is scaled, TSC-based delays will need a different<br />- * loops_per_jiffy value to function properly.<br />- */<br />-<br />-static unsigned int ref_freq = 0;<br />-static unsigned long loops_per_jiffy_ref = 0;<br />-<br />-#ifndef CONFIG_SMP<br />-static unsigned long fast_gettimeoffset_ref = 0;<br />-static unsigned int cpu_khz_ref = 0;<br />-#endif<br />-<br />-static int<br />-time_cpufreq_notifier(struct notifier_block *nb, unsigned long val,<br />- void *data)<br />-{<br />- struct cpufreq_freqs *freq = data;<br />-<br />- if (val != CPUFREQ_RESUMECHANGE)<br />- write_seqlock_irq(&xtime_lock);<br />- if (!ref_freq) {<br />- ref_freq = freq->old;<br />- loops_per_jiffy_ref = cpu_data[freq->cpu].loops_per_jiffy;<br />-#ifndef CONFIG_SMP<br />- fast_gettimeoffset_ref = fast_gettimeoffset_quotient;<br />- cpu_khz_ref = cpu_khz;<br />-#endif<br />- }<br />-<br />- if ((val == CPUFREQ_PRECHANGE && freq->old < freq->new) ||<br />- (val == CPUFREQ_POSTCHANGE && freq->old > freq->new) ||<br />- (val == CPUFREQ_RESUMECHANGE)) {<br />- if (!(freq->flags & CPUFREQ_CONST_LOOPS))<br />- cpu_data[freq->cpu].loops_per_jiffy = cpufreq_scale(loops_per_jiffy_ref, ref_freq, freq->new);<br />-#ifndef CONFIG_SMP<br />- if (cpu_khz)<br />- cpu_khz = cpufreq_scale(cpu_khz_ref, ref_freq, freq->new);<br />- if (use_tsc) {<br />- if (!(freq->flags & CPUFREQ_CONST_LOOPS)) {<br />- fast_gettimeoffset_quotient = cpufreq_scale(fast_gettimeoffset_ref, freq->new, ref_freq);<br />- set_cyc2ns_scale(cpu_khz);<br />- }<br />- }<br />-#endif<br />- }<br />-<br />- if (val != CPUFREQ_RESUMECHANGE)<br />- write_sequnlock_irq(&xtime_lock);<br />-<br />- return 0;<br />-}<br />-<br />-static struct notifier_block time_cpufreq_notifier_block = {<br />- .notifier_call = time_cpufreq_notifier<br />-};<br />-<br />-<br />-static int __init cpufreq_tsc(void)<br 
/>-{<br />- int ret;<br />- INIT_WORK(&cpufreq_delayed_get_work, handle_cpufreq_delayed_get, NULL);<br />- ret = cpufreq_register_notifier(&time_cpufreq_notifier_block,<br />- CPUFREQ_TRANSITION_NOTIFIER);<br />- if (!ret)<br />- cpufreq_init = 1;<br />- return ret;<br />-}<br />-core_initcall(cpufreq_tsc);<br />-<br />-#else /* CONFIG_CPU_FREQ */<br />-static inline void cpufreq_delayed_get(void) { return; }<br />-#endif <br />-<br />-int recalibrate_cpu_khz(void)<br />-{<br />-#ifndef CONFIG_SMP<br />- unsigned int cpu_khz_old = cpu_khz;<br />-<br />- if (cpu_has_tsc) {<br />- init_cpu_khz();<br />- cpu_data[0].loops_per_jiffy =<br />- cpufreq_scale(cpu_data[0].loops_per_jiffy,<br />- cpu_khz_old,<br />- cpu_khz);<br />- return 0;<br />- } else<br />- return -ENODEV;<br />-#else<br />- return -ENODEV;<br />-#endif<br />-}<br />-EXPORT_SYMBOL(recalibrate_cpu_khz);<br /> <br /> static void mark_offset_tsc(void)<br /> {<br />@@ -548,37 +367,6 @@ static int __init init_tsc(char* overrid<br /> return -ENODEV;<br /> }<br /> <br />-static int tsc_resume(void)<br />-{<br />- write_seqlock(&monotonic_lock);<br />- /* Assume this is the last mark offset time */<br />- rdtsc(last_tsc_low, last_tsc_high);<br />-#ifdef CONFIG_HPET_TIMER<br />- if (is_hpet_enabled() && hpet_use_timer)<br />- hpet_last = hpet_readl(HPET_COUNTER);<br />-#endif<br />- write_sequnlock(&monotonic_lock);<br />- return 0;<br />-}<br />-<br />-#ifndef CONFIG_X86_TSC<br />-/* disable flag for tsc. 
Takes effect by clearing the TSC cpu flag<br />- * in cpu/common.c */<br />-static int __init tsc_setup(char *str)<br />-{<br />- tsc_disable = 1;<br />- return 1;<br />-}<br />-#else<br />-static int __init tsc_setup(char *str)<br />-{<br />- printk(KERN_WARNING "notsc: Kernel compiled with CONFIG_X86_TSC, "<br />- "cannot disable TSC.\n");<br />- return 1;<br />-}<br />-#endif<br />-__setup("notsc", tsc_setup);<br />-<br /> <br /> <br /> /************************************************************/<br />diff --git a/arch/i386/kernel/tsc.c b/arch/i386/kernel/tsc.c<br />new file mode 100644<br />index 0000000..2e94eaf<br />--- /dev/null<br />+++ b/arch/i386/kernel/tsc.c<br />@@ -0,0 +1,312 @@<br />+/*<br />+ * This code largely moved from arch/i386/kernel/timer/timer_tsc.c<br />+ * which was originally moved from arch/i386/kernel/time.c.<br />+ * See comments there for proper credits.<br />+ */<br />+<br />+#include <linux/workqueue.h><br />+#include <linux/cpufreq.h><br />+#include <linux/init.h><br />+<br />+#include <asm/io.h><br />+<br />+#include "mach_timer.h"<br />+<br />+int tsc_disable __initdata = 0;<br />+#ifdef CONFIG_X86_TSC<br />+static int __init tsc_setup(char *str)<br />+{<br />+ printk(KERN_WARNING "notsc: Kernel compiled with CONFIG_X86_TSC, "<br />+ "cannot disable TSC.\n");<br />+ return 1;<br />+}<br />+#else<br />+/*<br />+ * disable flag for tsc. 
Takes effect by clearing the TSC cpu flag<br />+ * in cpu/common.c<br />+ */<br />+static int __init tsc_setup(char *str)<br />+{<br />+ tsc_disable = 1;<br />+<br />+ return 1;<br />+}<br />+#endif<br />+<br />+__setup("notsc", tsc_setup);<br />+<br />+<br />+int read_current_timer(unsigned long *timer_val)<br />+{<br />+ if (cur_timer->read_timer) {<br />+ *timer_val = cur_timer->read_timer();<br />+ return 0;<br />+ }<br />+ return -1;<br />+}<br />+<br />+<br />+/* convert from cycles(64bits) => nanoseconds (64bits)<br />+ * basic equation:<br />+ * ns = cycles / (freq / ns_per_sec)<br />+ * ns = cycles * (ns_per_sec / freq)<br />+ * ns = cycles * (10^9 / (cpu_khz * 10^3))<br />+ * ns = cycles * (10^6 / cpu_khz)<br />+ *<br />+ * Then we use scaling math (suggested by george@mvista.com) to get:<br />+ * ns = cycles * (10^6 * SC / cpu_khz) / SC<br />+ * ns = cycles * cyc2ns_scale / SC<br />+ *<br />+ * And since SC is a constant power of two, we can convert the div<br />+ * into a shift.<br />+ *<br />+ * We can use khz divisor instead of mhz to keep a better percision, since<br />+ * cyc2ns_scale is limited to 10^6 * 2^10, which fits in 32 bits.<br />+ * (mathieu.desnoyers@polymtl.ca)<br />+ *<br />+ * -johnstul@us.ibm.com "math is hard, lets go shopping!"<br />+ */<br />+static unsigned long cyc2ns_scale;<br />+<br />+#define CYC2NS_SCALE_FACTOR 10 /* 2^10, carefully chosen */<br />+<br />+static inline void set_cyc2ns_scale(unsigned long cpu_khz)<br />+{<br />+ cyc2ns_scale = (1000000 << CYC2NS_SCALE_FACTOR)/cpu_khz;<br />+}<br />+<br />+static inline unsigned long long cycles_2_ns(unsigned long long cyc)<br />+{<br />+ return (cyc * cyc2ns_scale) >> CYC2NS_SCALE_FACTOR;<br />+}<br />+<br />+/*<br />+ * Scheduler clock - returns current time in nanosec units.<br />+ */<br />+unsigned long long sched_clock(void)<br />+{<br />+ unsigned long long this_offset;<br />+<br />+ /*<br />+ * in the NUMA case we dont use the TSC as they are not<br />+ * synchronized 
across all CPUs.<br />+ */<br />+#ifndef CONFIG_NUMA<br />+ if (!use_tsc)<br />+#endif<br />+ /* no locking but a rare wrong value is not a big deal */<br />+ return jiffies_64 * (1000000000 / HZ);<br />+<br />+ /* read the Time Stamp Counter: */<br />+ rdtscll(this_offset);<br />+<br />+ /* return the value in ns */<br />+ return cycles_2_ns(this_offset);<br />+}<br />+<br />+/* ------ Calibrate the TSC -------<br />+ * Return 2^32 * (1 / (TSC clocks per usec)) for do_fast_gettimeoffset().<br />+ * Too much 64-bit arithmetic here to do this cleanly in C, and for<br />+ * accuracy's sake we want to keep the overhead on the CTC speaker (channel 2)<br />+ * output busy loop as low as possible. We avoid reading the CTC registers<br />+ * directly because of the awkward 8-bit access mechanism of the 82C54<br />+ * device.<br />+ */<br />+<br />+#define CALIBRATE_TIME (5 * 1000020/HZ)<br />+<br />+unsigned long calibrate_tsc(void)<br />+{<br />+ mach_prepare_counter();<br />+<br />+ {<br />+ unsigned long startlow, starthigh;<br />+ unsigned long endlow, endhigh;<br />+ unsigned long count;<br />+<br />+ rdtsc(startlow,starthigh);<br />+ mach_countup(&count);<br />+ rdtsc(endlow,endhigh);<br />+<br />+<br />+ /* Error: ECTCNEVERSET */<br />+ if (count <= 1)<br />+ goto bad_ctc;<br />+<br />+ /* 64-bit subtract - gcc just messes up with long longs */<br />+ __asm__("subl %2,%0\n\t"<br />+ "sbbl %3,%1"<br />+ :"=a" (endlow), "=d" (endhigh)<br />+ :"g" (startlow), "g" (starthigh),<br />+ "0" (endlow), "1" (endhigh));<br />+<br />+ /* Error: ECPUTOOFAST */<br />+ if (endhigh)<br />+ goto bad_ctc;<br />+<br />+ /* Error: ECPUTOOSLOW */<br />+ if (endlow <= CALIBRATE_TIME)<br />+ goto bad_ctc;<br />+<br />+ __asm__("divl %2"<br />+ :"=a" (endlow), "=d" (endhigh)<br />+ :"r" (endlow), "0" (0), "1" (CALIBRATE_TIME));<br />+<br />+ return endlow;<br />+ }<br />+<br />+ /*<br />+ * The CTC wasn't reliable: we got a hit on the very first read,<br />+ * or the CPU was so fast/slow 
that the quotient wouldn't fit in<br />+ * 32 bits..<br />+ */<br />+bad_ctc:<br />+ return 0;<br />+}<br />+<br />+int recalibrate_cpu_khz(void)<br />+{<br />+#ifndef CONFIG_SMP<br />+ unsigned long cpu_khz_old = cpu_khz;<br />+<br />+ if (cpu_has_tsc) {<br />+ init_cpu_khz();<br />+ cpu_data[0].loops_per_jiffy =<br />+ cpufreq_scale(cpu_data[0].loops_per_jiffy,<br />+ cpu_khz_old,<br />+ cpu_khz);<br />+ return 0;<br />+ } else<br />+ return -ENODEV;<br />+#else<br />+ return -ENODEV;<br />+#endif<br />+}<br />+EXPORT_SYMBOL(recalibrate_cpu_khz);<br />+<br />+<br />+/* calculate cpu_khz */<br />+void init_cpu_khz(void)<br />+{<br />+ if (cpu_has_tsc) {<br />+ unsigned long tsc_quotient = calibrate_tsc();<br />+ if (tsc_quotient) {<br />+ /* report CPU clock rate in Hz.<br />+ * The formula is (10^6 * 2^32) / (2^32 * 1 / (clocks/us)) =<br />+ * clock/second. Our precision is about 100 ppm.<br />+ */<br />+ { unsigned long eax=0, edx=1000;<br />+ __asm__("divl %2"<br />+ :"=a" (cpu_khz), "=d" (edx)<br />+ :"r" (tsc_quotient),<br />+ "0" (eax), "1" (edx));<br />+ printk("Detected %lu.%03lu MHz processor.\n", cpu_khz / 1000, cpu_khz % 1000);<br />+ }<br />+ }<br />+ }<br />+}<br />+<br />+#ifdef CONFIG_CPU_FREQ<br />+<br />+static unsigned int cpufreq_delayed_issched = 0;<br />+static unsigned int cpufreq_init = 0;<br />+static struct work_struct cpufreq_delayed_get_work;<br />+<br />+static void handle_cpufreq_delayed_get(void *v)<br />+{<br />+ unsigned int cpu;<br />+<br />+ for_each_online_cpu(cpu)<br />+ cpufreq_get(cpu);<br />+<br />+ cpufreq_delayed_issched = 0;<br />+}<br />+<br />+/*<br />+ * if we notice lost ticks, schedule a call to cpufreq_get() as it tries<br />+ * to verify the CPU frequency the timing core thinks the CPU is running<br />+ * at is still correct.<br />+ */<br />+void cpufreq_delayed_get(void)<br />+{<br />+ if (cpufreq_init && !cpufreq_delayed_issched) {<br />+ cpufreq_delayed_issched = 1;<br />+ printk(KERN_DEBUG "Losing some ticks... 
checking if CPU frequency changed.\n");<br />+ schedule_work(&cpufreq_delayed_get_work);<br />+ }<br />+}<br />+<br />+/*<br />+ * if the CPU frequency is scaled, TSC-based delays will need a different<br />+ * loops_per_jiffy value to function properly.<br />+ */<br />+<br />+static unsigned int ref_freq = 0;<br />+static unsigned long loops_per_jiffy_ref = 0;<br />+<br />+#ifndef CONFIG_SMP<br />+static unsigned long fast_gettimeoffset_ref = 0;<br />+static unsigned long cpu_khz_ref = 0;<br />+#endif<br />+<br />+static int<br />+time_cpufreq_notifier(struct notifier_block *nb, unsigned long val, void *data)<br />+{<br />+ struct cpufreq_freqs *freq = data;<br />+<br />+ if (val != CPUFREQ_RESUMECHANGE)<br />+ write_seqlock_irq(&xtime_lock);<br />+<br />+ if (!ref_freq) {<br />+ ref_freq = freq->old;<br />+ loops_per_jiffy_ref = cpu_data[freq->cpu].loops_per_jiffy;<br />+#ifndef CONFIG_SMP<br />+ fast_gettimeoffset_ref = fast_gettimeoffset_quotient;<br />+ cpu_khz_ref = cpu_khz;<br />+#endif<br />+ }<br />+<br />+ if ((val == CPUFREQ_PRECHANGE && freq->old < freq->new) ||<br />+ (val == CPUFREQ_POSTCHANGE && freq->old > freq->new) ||<br />+ (val == CPUFREQ_RESUMECHANGE)) {<br />+ if (!(freq->flags & CPUFREQ_CONST_LOOPS))<br />+ cpu_data[freq->cpu].loops_per_jiffy = cpufreq_scale(loops_per_jiffy_ref, ref_freq, freq->new);<br />+#ifndef CONFIG_SMP<br />+ if (cpu_khz)<br />+ cpu_khz = cpufreq_scale(cpu_khz_ref, ref_freq, freq->new);<br />+ if (use_tsc) {<br />+ if (!(freq->flags & CPUFREQ_CONST_LOOPS)) {<br />+ fast_gettimeoffset_quotient = cpufreq_scale(fast_gettimeoffset_ref, freq->new, ref_freq);<br />+ set_cyc2ns_scale(cpu_khz);<br />+ }<br />+ }<br />+#endif<br />+ }<br />+<br />+ if (val != CPUFREQ_RESUMECHANGE)<br />+ write_sequnlock_irq(&xtime_lock);<br />+<br />+ return 0;<br />+}<br />+<br />+static struct notifier_block time_cpufreq_notifier_block = {<br />+ .notifier_call = time_cpufreq_notifier<br />+};<br />+<br />+static int __init 
cpufreq_tsc(void)<br />+{<br />+ int ret;<br />+<br />+ INIT_WORK(&cpufreq_delayed_get_work, handle_cpufreq_delayed_get, NULL);<br />+ ret = cpufreq_register_notifier(&time_cpufreq_notifier_block,<br />+ CPUFREQ_TRANSITION_NOTIFIER);<br />+ if (!ret)<br />+ cpufreq_init = 1;<br />+ return ret;<br />+}<br />+<br />+core_initcall(cpufreq_tsc);<br />+<br />+#else /* CONFIG_CPU_FREQ */<br />+void cpufreq_delayed_get(void) { return; }<br />+#endif<br />diff --git a/include/asm-i386/timex.h b/include/asm-i386/timex.h<br />index 292b5a6..ebcc74e 100644<br />--- a/include/asm-i386/timex.h<br />+++ b/include/asm-i386/timex.h<br />@@ -8,6 +8,7 @@<br /> <br /> #include <linux/config.h><br /> #include <asm/processor.h><br />+#include <asm/tsc.h><br /> <br /> #ifdef CONFIG_X86_ELAN<br /> # define CLOCK_TICK_RATE 1189200 /* AMD Elan has different frequency! */<br />@@ -16,39 +17,6 @@<br /> #endif<br /> <br /> <br />-/*<br />- * Standard way to access the cycle counter on i586+ CPUs.<br />- * Currently only used on SMP.<br />- *<br />- * If you really have a SMP machine with i486 chips or older,<br />- * compile for that, and this will just always return zero.<br />- * That's ok, it just means that the nicer scheduling heuristics<br />- * won't work for you.<br />- *<br />- * We only use the low 32 bits, and we'd simply better make sure<br />- * that we reschedule before that wraps. Scheduling at least every<br />- * four billion cycles just basically sounds like a good idea,<br />- * regardless of how fast the machine is. 
<br />- */<br />-typedef unsigned long long cycles_t;<br />-<br />-static inline cycles_t get_cycles (void)<br />-{<br />- unsigned long long ret=0;<br />-<br />-#ifndef CONFIG_X86_TSC<br />- if (!cpu_has_tsc)<br />- return 0;<br />-#endif<br />-<br />-#if defined(CONFIG_X86_GENERIC) || defined(CONFIG_X86_TSC)<br />- rdtscll(ret);<br />-#endif<br />- return ret;<br />-}<br />-<br />-extern unsigned int cpu_khz;<br />-<br /> extern int read_current_timer(unsigned long *timer_value);<br /> #define ARCH_HAS_READ_CURRENT_TIMER 1<br /> <br />diff --git a/include/asm-i386/tsc.h b/include/asm-i386/tsc.h<br />new file mode 100644<br />index 0000000..86288f2<br />--- /dev/null<br />+++ b/include/asm-i386/tsc.h<br />@@ -0,0 +1,44 @@<br />+/*<br />+ * linux/include/asm-i386/tsc.h<br />+ *<br />+ * i386 TSC related functions<br />+ */<br />+#ifndef _ASM_i386_TSC_H<br />+#define _ASM_i386_TSC_H<br />+<br />+#include <linux/config.h><br />+#include <asm/processor.h><br />+<br />+/*<br />+ * Standard way to access the cycle counter on i586+ CPUs.<br />+ * Currently only used on SMP.<br />+ *<br />+ * If you really have a SMP machine with i486 chips or older,<br />+ * compile for that, and this will just always return zero.<br />+ * That's ok, it just means that the nicer scheduling heuristics<br />+ * won't work for you.<br />+ *<br />+ * We only use the low 32 bits, and we'd simply better make sure<br />+ * that we reschedule before that wraps. 
Scheduling at least every<br />+ * four billion cycles just basically sounds like a good idea,<br />+ * regardless of how fast the machine is.<br />+ */<br />+typedef unsigned long long cycles_t;<br />+<br />+static inline cycles_t get_cycles (void)<br />+{<br />+ unsigned long long ret=0;<br />+<br />+#ifndef CONFIG_X86_TSC<br />+ if (!cpu_has_tsc)<br />+ return 0;<br />+#endif<br />+<br />+#if defined(CONFIG_X86_GENERIC) || defined(CONFIG_X86_TSC)<br />+ rdtscll(ret);<br />+#endif<br />+ return ret;<br />+}<br />+<br />+extern unsigned int cpu_khz;<br />+#endif<br />-<br />To unsubscribe from this list: send the line "unsubscribe linux-kernel" in<br />the body of a message to majordomo@vger.kernel.org<br />More majordomo info at <a href="http://vger.kernel.org/majordomo-info.html">http://vger.kernel.org/majordomo-info.html</a><br />Please read the FAQ at <a href="http://www.tux.org/lkml/">http://www.tux.org/lkml/</a><br /></pre></td><td width="32" rowspan="2" class="c" valign="top"><img src="/images/icornerr.gif" width="32" height="32" alt="\" /></td></tr><tr><td align="right" valign="bottom"> &nbsp; </td></tr><tr><td align="right" valign="bottom">&nbsp;</td><td class="c" valign="bottom" style="padding-bottom: 0px"><img src="/images/bcornerl.gif" width="32" height="32" alt="\" /></td><td class="c">&nbsp;</td><td class="c" valign="bottom" style="padding-bottom: 0px"><img src="/images/bcornerr.gif" width="32" height="32" alt="/" /></td></tr><tr><td align="right" valign="top" colspan="2"> &nbsp; </td><td class="lm">Last update: 2005-12-16 02:12 &nbsp;&nbsp; [from the cache]<br />&copy;2003-2020 <a href="http://blog.jasper.es/"><span itemprop="editor">Jasper Spaans</span></a>|hosted at <a href="https://www.digitalocean.com/?refcode=9a8e99d24cf9">Digital Ocean</a> and my Meterkast|<a href="http://blog.jasper.es/categories.html#lkml-ref">Read the blog</a></td><td>&nbsp;</td></tr></table><script language="javascript" src="/js/styleswitcher.js" type="text/javascript"></script></body></html>