By anders pearson, 01 Nov 2005
Unicode is a wonderful thing. it is also occasionally the bane of my existence.
Joel Spolsky has a classic article on The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) that covers the basics nicely. he doesn’t go much into the specifics of dealing with unicode issues in any particular programming language or platform though.
Python does a decent job of making it possible to write applications that are unicode aware. There are some decent pages out there that cover the basics of python and unicode. it’s not very hard. python has two different kinds of internal representations of strings, unicode strings and 8-bit non-unicode strings (basically ASCII). all of python’s built-in functionality and core libraries will work with either just fine. you can mix and match them without having to pay much attention to what kind of string you have. it only gets tricky when python has to deal with an outside system, like I/O, network sockets, or databases. unfortunately, that’s pretty often and the bugs that pop up can be maddening to track down and fix.
the usual scenario is that you build your application and test it and everything works fine. then you release it to the world and the first user who comes along copies and pastes in some text from MS Word with weird “smart” quotes and assorted non-ASCII junk, or tries to write in chinese, and your precious application chokes and gurgles and starts spitting up arcane UnicodeDecodeError messages all over the users. then you get to spend some quality time with a pile of tracebacks trying to figure out where in your code (or the code of a library you’re using) something isn’t getting encoded properly. half the time, fixing the bug that cropped up creates another, more subtle unicode related bug somewhere else. just a fun time all around.
i’ve been on a unicode kick lately at work and spent some time experimenting and getting very familiar with the unicode related quirks of the particular technology stack that i prefer to work with at the moment: cherrypy, SQLObject, PostgreSQL, simpleTAL, and textile. here are my notes on how i got them all to play nicely together wrt unicode.
the basic strategy is that application code should try to deal with unicode strings at all times and only encode and decode when talking to the browser or some component that for some reason can’t handle unicode strings. whenever a string is encoded, it should be encoded as UTF8 (if you’re writing applications that would mostly be used by eg, chinese speakers though, you might want to go with UTF16 or UTF32, but for most of us, UTF8 is all kinds of goodness).
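a minimal sketch of that boundary rule (written in modern python syntax with bytes literals; the function names are my own, not from any library):

```python
def from_outside(raw_bytes):
    # raw input (socket, file, form post) arrives as utf8-encoded bytes;
    # decode to unicode immediately at the boundary
    return raw_bytes.decode('utf8')

def to_outside(text):
    # everything leaving the application gets encoded exactly once,
    # at the very edge
    return text.encode('utf8')

# the utf8 bytes for the two characters u"\u738b\u83f2"
name = from_outside(b'\xe7\x8e\x8b\xe8\x8f\xb2')
```

everything in between the two boundaries works purely on unicode strings.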
postgresql
postgresql supports unicode out of the box. however, on gentoo at least, it doesn’t encode databases in UTF8 by default, instead using “SQL_ASCII” or something. i didn’t actually test too much to see what went wrong if you didn’t use a UTF8 encoded database. i would assume that kittens get murdered and the baby jesus cries and all sorts of other horrible things happen. anyway, just remember to create databases with:
% createdb -Eunicode mydatabase
and everything should be fine. converting existing databases isn’t very hard either using iconv. just dump it, convert it, drop the database, recreate it with the right encoding and import:
% pg_dump mydatabase > mydatabase_dump.sql
% iconv -f latin1 -t utf8 mydatabase_dump.sql > mydatabase_dump_utf8.sql
% dropdb mydatabase
% createdb -Eunicode mydatabase
% psql mydatabase -f mydatabase_dump_utf8.sql
cherrypy
cherrypy has encoding and decoding filters that make it a cinch to ensure that the application <-> browser boundary converts everything properly. as long as you have:
cherrypy.config.update({'encodingFilter.on': True,
                        'encodingFilter.encoding': 'utf8',
                        'decodingFilter.on': True})
in the startup, it should do the right thing. all your output will be encoded as UTF8 when it’s sent to the browser, charsets will be set in the headers, and your application will get all its input as nice unicode strings.
SQLObject
SQLObject has the tough job of playing border patrol with the database. for the most part, it just works. it has a UnicodeCol type that makes most operations smooth. so instead of defining a class like:
class Page(SQLObject):
    title = StringCol(length=256)
    body = StringCol()
you do:
class Page(SQLObject):
    title = UnicodeCol(length=256)
    body = UnicodeCol()
and all is well. you can do things like:
>>> p = Page(title=u"\u738b\u83f2",body=u"\u738b\u83f2 is a chinese pop star.")
>>> print p.title.encode('utf8')
unicode goes in, unicode comes out. i did discover a few places though that SQLObject wasn’t happy about getting unicode. eg, doing:
>>> results = list(Page.select(Page.q.title == u"\u738b\u83f2"))
Traceback ... etc. big ugly traceback ending in:
  File "/usr/lib/python2.4/site-packages/sqlobject/dbconnection.py", line 295, in _executeRetry
    return cursor.execute(query)
TypeError: argument 1 must be str, not unicode
so you do have to be careful to encode your strings before doing a query like that. ie, this works:
>>> results = list(Page.select(Page.q.title == u"\u738b\u83f2".encode('utf8')))
since it’s just a wrapper around the same functionality, you need to use the same care with alternateID columns and Table.byColumnName() queries. so
>>> u = User.byUsername(username)
is out and
>>> u = User.byUsername(username.encode('utf8'))
is in.
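since every query needs the same treatment, a tiny helper keeps the encode calls from spreading all over the code. this is just a sketch of my own; utf8q is a made-up name, not part of SQLObject:

```python
def utf8q(value):
    # encode unicode query values to utf8 byte strings; pass
    # everything else (ints, already-encoded strings) through untouched
    if isinstance(value, type(u'')):
        return value.encode('utf8')
    return value
```

then queries become Page.select(Page.q.title == utf8q(title)) and User.byUsername(utf8q(username)), and there’s only one place to get the encoding right.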
similarly, it doesn’t like unicode for the orderBy parameter:
>>> r = list(Page.select(Page.q.title == "foo", orderBy=u"title"))
gives you another similar error. this only comes up because i frequently do something like:
:::python
# in some cherrypy controller class
@cherrypy.expose
def search(self, q="", order_by="modified"):
    r = Page.select(Page.q.title == q, orderBy=order_by)
    # ... format the results and send them to the browser
now, using the cherrypy decodingFilter, which otherwise makes unicode errors disappear, the order_by that gets sent in from the browser is a unicode string. once again, you’ll need to make sure you encode it as UTF8.
lastly, EnumCols don’t get converted automatically:
>>> class Ex(SQLObject):
... foo = EnumCol(enumValues=['a','b','c'])
...
>>> e = Ex(foo=u"a")
will give the usual TypeError exception. it also appears that you just can’t use unicode in EnumCols at all:
>>> class Ex2(SQLObject):
... foo = EnumCol(enumValues=[u"a",u"b",u"c"])
...
>>> Ex2.createTable()
will fail right from the start.
i haven’t really done enough research to determine if those issues are bugs in SQLObject, bugs in the python postgres driver (psycopg), bugs in postgresql, whether there are good reasons for them to be the way they are, or if i’m just doing something obviously foolish. either way, they are easily worked around so it’s not that big a deal.
simpleTAL
the basic pattern for how i use simpleTAL with cherrypy is something like:
def tal_template(filename, values):
    from simpletal import simpleTAL, simpleTALES
    import cStringIO
    context = simpleTALES.Context()
    # omitting some stuff i do to set up macros, etc.
    # ...
    for k in values.keys():
        context.addGlobal(k, values[k])
    templatefile = open(filename, 'r')
    template = simpleTAL.compileXMLTemplate(templatefile)
    templatefile.close()
    f = cStringIO.StringIO()
    template.expand(context, f)
    return f.getvalue()
this, unfortunately, breaks if it comes across any unicode strings in your context. to fix that, you need to specify an outputEncoding on the expand line:
template.expand(context,f,outputEncoding="utf8")
then, since the cherrypy encodingFilter is going to encode all of our output, i change the last line of the function to return a unicode string:
return unicode(f.getvalue(),'utf8')
and it all comes together nicely.
textile
textile, i think, tries to be too clever for its own good. unfortunately, if you give it a unicode string with some nice non-ascii characters, you get the dreaded UnicodeDecodeError when it tries to convert it to ascii internally:
>>> from textile import textile
>>> textile(u"\u201d")
... blah blah blah... UnicodeDecodeError
it fares slightly better if you give it a utf8 encoded string:
>>> textile(u"\u201d".encode('utf8'))
'<p>&#226;&#128;&#157;</p>'
except that that’s… wrong. rather than spend too much time trying to figure out what textile’s problem was, i reasoned that since its purpose in life is just to spit out html, there was no harm in letting python convert the non-ascii characters to XML numerical entities before running it through textile:
>>> textile(u"\u201d".encode('ascii','xmlcharrefreplace'))
'<p>&#8221;</p>'
which is correct.
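the same trick works on any unicode string headed for an ascii-only consumer: the xmlcharrefreplace error handler swaps every non-ascii character for its numeric entity and leaves plain ascii alone. a small sketch (entityify is a made-up name of mine):

```python
def entityify(text):
    # replace non-ascii characters with XML numeric character
    # references, leaving plain ascii untouched
    return text.encode('ascii', 'xmlcharrefreplace')
```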
[update: 2005-11-02] as discussed in the comments of a post on Sam Ruby’s blog, numerical entities are, in general, not a very good solution. it’s better than nothing, but ultimately it looks like i or someone else is going to have to fix textile’s unicode support if i really want things done properly.
memcached (bonus!)
once i’d done all this research, it didn’t take me very long to audit one of our applications at work and get fairly confident that it can now handle anything that’s thrown at it (and of course it now has a bunch more unicode related unit tests to make sure it stays that way).
so this evening i decided to do the same audit on the thraxil.org code. going through the above checklist i had it more or less unicode clean in short order. the only thing i missed at first is that the site uses memcached to cache things and memcached doesn’t automatically marshal unicode strings. so a .encode('utf8') in the set_cache() and a unicode(value,'utf8') in the get_cache() were needed before everything was happy again.
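the two wrappers end up looking something like this sketch, where a plain dict stands in for the real memcache.Client so the example is self-contained:

```python
mc = {}  # stand-in for memcache.Client; same get/set idea

def set_cache(key, value):
    # memcached wants byte strings, so encode on the way in
    mc[key] = value.encode('utf8')

def get_cache(key):
    raw = mc.get(key)
    if raw is None:
        return None
    # and decode back to unicode on the way out
    return raw.decode('utf8')
```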
i’m probably missing something, but that’s basically what’s involved in getting a python web application to handle unicode properly. there are some additional shortcuts that i didn’t mention like setting your global default encoding to ‘utf8’ instead of ‘ascii’ but it doesn’t change much, isn’t safe to rely on, and i think it’s useful to understand the details of what’s going on anyway.
for the record, the exact versions i’m using are: Python 2.4, PostgreSQL 8.0.3, cherrypy 2.1, SQLObject 0.7, simpleTAL 3.13, textile 2.0.10, memcache.py 1.2_tummy5, and psycopg 1.1.15.
By anders pearson, 23 Oct 2005
i’ve been reading <a href="http://www.amazon.com/exec/obidos/tg/detail/-/0684826305/">The Golden Bough</a> lately. it’s sort of an exhaustive study of old rituals, myths, and superstitions. the other night i came across this passage in the chapter on rituals for transference of evil or illness to animals (page 631):
A Bohemian cure for fever is to go out into the forest before the sun is up and look for a snipe’s nest. When you have found it, take out one of the young birds and keep it beside you for three days. Then go back into the wood and set the snipe free. The fever will leave you at once. The snipe has taken it away. So in Vedic times the Hindoos of old sent consumption away with a blue jay. They said, “O consumption, fly away, fly away with the blue jay! With the wild rush of the storm and the whirlwind, oh, vanish away!” In the village of Llandegla in Wales there is a church dedicated to the virgin martyr St. Tecla, where the falling sickness is, or used to be, cured by being transferred to a fowl. The patient first washed his limbs in a sacred well hard by, dropped fourpence into it as an offering, walked thrice round the well, and thrice repeated the Lord’s prayer. Then the fowl, which was a cock or a hen according as the patient was a man or a woman, was put into a basket and carried round first the well and afterwards the church. Next the sufferer entered the church and lay down under the communion table till break of day. After that he offered sixpence and departed, leaving the fowl in the church. If the bird died, the sickness was supposed to have been transferred to it from the man or woman, who was now rid of the disorder. As late as 1855 the old parish clerk of the village remembered quite well to have seen the birds staggering about from the effects of the fits which had been transferred to them.
reading that, it occurred to me that with all the avian flu stuff going on now, the tables have turned. now the birds are transferring the sickness back to us.
By anders pearson, 10 Oct 2005
i’ve been playing with last.fm lately. today, on my profile page, which lists the songs i’ve been listening to recently, there was the following google ad:
“Despair Research Depression at WebMD- Learn about Treatment & Symptoms”
apparently google’s algorithms have decided that i’m not listening to happy enough music.
i found it funny, anyway.
By anders pearson, 08 Oct 2005
i’ve been working down my list of stuff that i broke when moving the site to cherrypy and i think i’ve pretty much got it all fixed. if you find something else broken, let me know.
the old engine had a static publishing approach. when you added a post or a comment, it figured out which pages were affected by the change and wrote out new static copies of those files on disk, which apache could then serve without any intense processing. combined with a somewhat byzantine architecture of server side includes, this was quite scalable. the site could handle a pounding from massive amounts of traffic without really breaking a sweat because most of the time, it was just serving up static content.
with cherrypy now, everything is served dynamically, meaning that every time someone visits the frontpage, a whole bunch of python code is run and a bunch of data is pulled out of the database, processed, run through some templates, and sent out to the browser.
this obviously doesn’t scale as well and you may have noticed that page loads were a little slower than before (although, honestly, not as slow as i was expecting them to be). so, have i lost my mind? why would i purposely make the site slower?
my main reason is that by serving pages dynamically, i could drastically simplify the code. the code for calculating which pages were affected by a given update was a huge percentage of the overall code. it made adding any new features or refactoring a daunting task. if the sheer volume of code weren’t enough, any time i made a change to the engine, all the pages on disk essentially needed to be regenerated. i had a little script for that but with thousands of posts and comments in the database, running it would actually take a few hours. so that was another obstacle in the way of making improvements to the site. the overall result was that i let things kind of stagnate for quite a while. with everything generated dynamically, the code is short and clean and any changes i make are instantly reflected with just a browser refresh.
performance with the new code was definitely not as good, but it was actually decent enough to satisfy me for a few days while i finished fixing everything. i did a couple quick benchmarks requesting the index page (which is one of the more database intensive pages, and, along with the feeds, one of the most heavily trafficked pages) 100 times, ten concurrent requests (using ab2 -n 100 -c 10). i found that it could serve 0.69 requests per second when requested remotely (thus, with a typical network latency) or 0.9/sec when requested locally (no network latency, so a better picture of how much actual server load is being caused). not great, but also not as bad as i expected. for comparison, apache serving the old static index gave me 6.8/sec (remote) and 28/sec (local). so it was about an order of magnitude slower. not awful, but bad enough that i would need to do something about it.
tonight, once i got everything i could think of fixed, i explored memcached and appreciated its simplicity. it only took me a couple minutes and a couple lines of code to set up memcached caching of the index page, feeds, and user index pages. the result is 6.0/sec (remote) and 85/sec (local), which makes me very happy. the remote requests are clearly limited by the network connection somewhere between my home machine and thraxil.org so there’s nothing i could do to make that any faster. since memcached keeps everything in RAM, it manages to outperform apache serving a file off disk on the local requests. i’ve got a couple more pages that i want to add caching for but i’m resisting the urge to go hogwild caching everything because i know that that’ll get me back to an ugly mess of code to determine which caches need to be expired on a given update.
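the caching itself is just the usual check-the-cache-first pattern. a rough sketch, with a dict standing in for memcached and render_index standing in for the real page-building code (both names are mine, not from the site):

```python
cache = {}  # stand-in for the memcached client

def cached(key, build):
    # check the cache first; only do the expensive work on a miss
    value = cache.get(key)
    if value is None:
        value = build()      # expensive: database queries, templates, etc.
        cache[key] = value   # so the next request is a straight lookup
    return value

calls = []
def render_index():
    # stands in for the expensive database + template work
    calls.append(1)
    return u'<html>index</html>'

first = cached('index', render_index)
second = cached('index', render_index)
```

the second request never touches render_index at all, which is where the order-of-magnitude speedup comes from.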
of course, i’m also mulling over the possibility of writing some code to cache based on a dependency graph and making that into a cherrypy filter. if i could do it right, it wouldn’t get in the way. but that’s low on my list of priorities right now.
depending on whether i feel more like painting or coding this weekend, i may crank out a few items from my ‘new features and enhancements’ list.
By anders pearson, 05 Oct 2005
for a while, i’ve been porting the engine behind this site to cherrypy little by little. tonight i made a big push and got it all running.
i know some things are still broken. give me a day or two to fix them before you complain too much. i also have grand plans for memcached to speed things up…
By anders pearson, 20 Sep 2005
[cross posted from the WaSP to take comments]
Are you test infected? Do you work on dynamic sites and wish there was an automated way to run the output through the W3C validator? Do you wish it was integrated nicely with your unit testing framework?
Scott Raymond has come up with a nice bit of code to add automated validation to the unit tests for a Ruby on Rails application.
If you’re not on Rails, the technique should be pretty straightforward to adapt to your preferred language/framework. Just make a POST request to http://validator.w3.org/check sending parameters fragment (your page, encoded) and output=xml. Then check the response for a header called x-w3c-validator-status to see if it says Valid. If so, your test passed.
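the pass/fail check boils down to reading one response header. a sketch of that last step (check_validator_status is a made-up helper; headers is whatever dict-like object your HTTP library hands back after the POST):

```python
def check_validator_status(headers):
    # the W3C validator reports its verdict in a single response
    # header; anything other than "Valid" is a failed test
    return headers.get('x-w3c-validator-status', '') == 'Valid'
```

in a unit test you would make the POST, then simply assert check_validator_status(response_headers).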
By anders pearson, 08 Sep 2005
recently, a friend of mine got a job as the head of the CS department (a one person department, so that means he also teaches all the CS classes) at a private high school and is faced with the challenge of redesigning their curriculum. he’s been teaching CS courses at the college level for a few years now but hasn’t taught high school before or designed a full curriculum. he came to Eric and me for ideas.
the school is a typical 4-year high school. CS is an elective there, so one of his goals is to keep things interesting and fun enough that students will actually sign up for the classes.
i don’t have an educational background, but i have a surplus of opinions about how CS should be taught, so i thought about the problem a while and came up with a rough outline of a curriculum myself. i’m posting it here to see what sort of opinions others have about CS education and how programming should be taught.
first of all, i think it’s important to consider the different types of CS students and what they may get out of a program.
at one end, you have the hardcore computer geek kids. they’re already interested in computers and won’t need much encouragement; just let them loose and provide them with as much info as they can take. someone without the ability to learn and explore on their own just won’t make it far in programming. a good program for these kids will just accelerate the process for them. it should also ensure that if they pursue CS in college, that they’ll be well prepared.
at the other end are kids who have no real desire to ever be a programmer or even do anything close to math or hard sciences. i happen to think that learning programming still has a lot of value for them, and not just because computers have invaded every aspect of modern life and knowing how to subjugate them to your will is becoming a more generally useful skill. science and math are valued parts of a liberal arts education because of mental skills that they are supposed to impart to the student: thinking precisely, logical problem solving, breaking a problem down into smaller, solvable sub-problems, the habit of creating testable hypotheses to understand new phenomena, etc. programming can offer all that but in a very concrete and direct fashion. having a computer that does exactly what you tell it to can be very humbling and make you realize that quite often you don’t really understand what you’re telling it. i’ve learned from years of programming that unless i can write a program expressing some concept, i probably don’t fully understand it. you just can’t bullshit your way past a compiler.
if they’re lucky, beyond picking up those fancy liberal arts sort of values, they’ll also gain an appreciation for computers as tools to eliminate repetition. that’s largely what they are to me. if you have a computer and you know how to teach it to do things for you, there’s no good reason to ever do a boring or time-consuming task twice. being comfortable getting a computer to automate stuff for you is useful in just about any field.
unfortunately, though, most CS curriculums i’ve seen seem almost purposefully designed to drive students away from programming, starting out with such exciting topics as binary and hexadecimal numbers, fibonacci sequences, and factorials. after learning a bunch of meaningless theory, students are then introduced to the exciting world of writing programs that spit out some text or numbers to a console. probably in a language like Java that requires them to understand object oriented programming, static methods, and classpaths just to make sense of a “hello world” program. the lesson to students is clear: programming is arcane, complicated, and boring.
the overall goal of the curriculum, then, is to cover as much of that spectrum as possible and avoid scaring them off too early. the obvious approach is to take advantage of the fact that only the relatively hardcore kids are going to stick with it for all four years. so the program starts off gently and tries to get in as many of the basic concepts as possible without causing too much pain and eventually progresses to the more nitty-gritty stuff that’s really only going to help the future programmers in the class.
year 1
goals:
don’t scare too many students. cultivate interest. develop understanding of basic computer programming concepts and problem solving techniques. precision, breaking down problems, methodical approach to debugging, etc.
curriculum:
robotics (probably Lego Mindstorms or similar). fun for the students, makes things very hands on, allows for programming to be introduced in a very concrete manner (“hello world” programs that print stuff on a computer screen are boring. making a real physical thing that moves around and does stuff is much more satisfying.). introduce basic concepts of programming as needed: conditionals, loops, variables, subroutines. follow up each with concrete exercises. heavy focus on problem solving aspects, debugging, and incremental design. avoid too much abstraction. possibly spend some time (towards the end) on stuff like Word Macros, spreadsheets, applescript, etc focusing on how they can let you automate away boring, repetitive tasks.
year 2
goals:
develop more advanced skills and understanding. build toolset for developing real applications.
curriculum:
taught in python, scheme, ruby, haskell, etc. something high level with relatively simple syntax, a REPL (Read Eval Print Loop), and built-in simple data structures. emphasis on use of data structures rather than how to build them. basic development process: editor, shell, interpreter, libraries, tools. simple OOP (Object Oriented Programming) concepts. simple algorithms, focusing on graphics rather than math. continued focus on practical scripting and automation; this time just with better, more general tools (python + unix shell rather than Word macros).
year 3
goals:
advanced programming concepts, databases, web, low level programming. prepare for project work.
curriculum:
more data structures and algorithms. introduce lower level programming and hardware understanding. memory, process execution, stacks and function calls. work with a processor simulator for a short unit on basic assembly language (simulate a simpler processor than x86. maybe Z80 or MIPS). move to C and focus on pointers and memory management (should be much more concrete after working in assembly). linked lists and trees and their construction in C using pointers. database concepts and SQL. how to access from python (or ruby, etc.), database schema design. intro to web technologies: HTTP, HTML, CSS, work with a simple web server.
year 4
goals:
project work. get more depth into a particular area. experience building something non-trivial. software engineering, managing complexity, usability, etc.
curriculum:
student driven. most time spent on individual or group projects. students write project proposals, then develop them. occasional lectures on advanced topics related to student projects, but mostly time is spent on projects. perhaps a short unit or two on some other languages (Java, which they’re bound to run into in college, lisp, smalltalk, prolog, etc.)
By anders pearson, 01 Sep 2005
just got back from a week in Buenos Aires with lani and her friend Yura who’s been living down there for the last two years. most of the time we spent drinking alcohol, coffee, and yerba mate alternately, occasionally stopping to eat (big, bloody steaks for them, usually something spinach based for me), walk around taking pictures, or go to a russian/argentine hip hop show.
took me ages, but all my pictures are uploaded, titled, described, and tagged. enjoy. now i must sleep.
By anders pearson, 11 Aug 2005
when my coworker, Dan, woke up this morning, i’m pretty sure that “learning linear algebra” wasn’t on his todo list for the day. linear algebra certainly wasn’t on my mind either when he came to me for help on a problem.
Dan has to do basic faculty support and training for the university’s course management system (which we use, but don’t develop ourselves). when something’s broken, he usually has to be the one to file the bug report with the developers. today he was working on trying to understand a bug in the gradebook so he could give them a useful report of exactly what was going wrong.
the trouble seemed to be that the grades it was calculating weren’t quite right. for the class in question, there were only three grades per student that were supposed to be weighted 30%, 30%, and 40% respectively. Dan had imported the grades into a spreadsheet and created a formula to generate the correct grades and display them next to the incorrect ones that the gradebook was coming up with. we spent a couple minutes staring at it and noticed that there seemed to be a rough correlation between the first column and the average. when the grade in the first column was higher than the average, the gradebook’s result was also higher and vice versa. that gave us the suspicion that the gradebook was weighting the columns incorrectly and, in particular, weighting the first column too heavily.
Dan started plugging different weights into his formula, trying to guess the gradebook’s weights and verify that that was indeed what was going wrong. i watched him do that for a few minutes and realized that it could take him all day to stumble on the right set of weights. if we were right about the weights being off, then it’s just a system of linear equations with three unknowns (the three weights), which should be fairly straightforward to solve with Gaussian Elimination. the spreadsheet looked like this:
|A |B | C|Gradebook|Correct|
|-----|----|--|--------:|------:|
|78.33|85.3|95| 85.07 | 87.1|
|68.33|80.3|89| 77.45 | 80.1|
|70.00|88.7|86| 79.24 | 82.0|
well, there were a bunch more rows, but three is enough to see what’s going on and actually just enough to solve the system. anyway, that just corresponds to these three equations:
(Wa * 78.33) + (Wb * 85.3) + (Wc * 95) = 85.07
(Wa * 68.33) + (Wb * 80.3) + (Wc * 89) = 77.45
(Wa * 70.00) + (Wb * 88.7) + (Wc * 86) = 79.24
where we need to solve for Wa, Wb, and Wc. in matrix form this is just:
|78.33 85.3 95| |Wa| |85.07|
|68.33 80.3 89| X |Wb| = |77.45|
|70.00 88.7 86| |Wc| |79.24|
Gaussian elimination is just a somewhat mechanical algorithm for taking a system like that and coming out with values for Wa, Wb, and Wc.
i started walking Dan through it on paper but we started getting some bogus looking numbers and i remembered why it was such a pain. the process is pretty mechanical, but it involves a lot of arithmetic and generally just has a lot of places where you can make a mistake and mess the whole thing up. if we’d had a copy of Matlab or even my trusty old TI-85, it would have been no problem.
luckily though, python has some nice linear algebra packages so i went and punched in this code:
from Numeric import *
from LinearAlgebra import solve_linear_equations

a = array([[78.33, 85.3, 95.],
           [68.33, 80.3, 89.],
           [70.,   88.7, 86.]])
b = array([85.07, 77.45, 79.24])
print solve_linear_equations(a, b)
and got the result [0.46242474, 0.23079146, 0.30696587], which we plugged back into the spreadsheet and saw that it indeed matched what the gradebook was calculating for all the rows. now, why the gradebook code was using those weights instead of [.30, .30, .40] we have no idea; but that’s the developers’ problem, not ours.
i guess my point is that math shows up in unexpected places, so it pays to 1) know enough to recognize it and know how to approach solving a math problem and 2) know how to get a computer to solve the problem for you. so pay attention in calculus class, kids.
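for the curious, the elimination we were botching on paper is mechanical enough to sketch in a few lines of plain python with no Numeric needed. this is my own toy version (partial pivoting plus back-substitution), not what LinearAlgebra does internally:

```python
def solve(a, b):
    # gaussian elimination with partial pivoting, then back-substitution
    n = len(a)
    a = [row[:] for row in a]  # work on copies so the caller's data survives
    b = b[:]
    for col in range(n):
        # pick the row with the largest entry in this column as the pivot,
        # which is exactly the bookkeeping we kept botching by hand
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        b[col], b[pivot] = b[pivot], b[col]
        # eliminate this column from the rows below
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
            b[r] -= factor * b[col]
    # back-substitute from the bottom up
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(a[r][c] * x[c] for c in range(r + 1, n))
        x[r] = s / a[r][r]
    return x

weights = solve([[78.33, 85.3, 95.0],
                 [68.33, 80.3, 89.0],
                 [70.0,  88.7, 86.0]],
                [85.07, 77.45, 79.24])
```

which recovers the same three mystery weights as the Numeric one-liner.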
By anders pearson, 28 Jun 2005
the next logical step is apparently zombie dogs.
every day i’m becoming more and more convinced that we’re really living in a movie. i’m not sure yet if it’s sci-fi or horror.

dogs aren’t having all the fun though. scientists have also been <a href="http://www.kuro5hin.org/story/2005/6/20/111815/063">extracting video from cat brains</a>. now we just need to combine the two and extract video from zombie animal brains. or combine everything and pull the video out of cockroach controlled zombie animals.