
Build a fictional mini online bookstore database with mock data, to serve as the execution source for the MySQL scripts that follow and to make later MySQL and SQL practice easier.

Creating Databases and Tables in MySQL

Building this fictional database mainly involves two things: managing MySQL database objects (databases, tables, indexes, foreign keys, and so on) from the command line, and, more importantly, mocking the data for the corresponding tables.

Dump script for the fictional bookstore database: Github

Database

The database for the fictional bookstore will be named mysql_practice.

Syntax for creating a database:

CREATE DATABASE [IF NOT EXISTS] database_name
[CHARACTER SET charset_name]
[COLLATE collation_name]
  1. IF NOT EXISTS: optional; avoids an error when the database already exists.
  2. CHARACTER SET: optional; if omitted, a default is applied. To list the character sets supported by the current MySQL server: show character set; or show charset; or show char set; (three equivalent forms).
  3. COLLATE: the set of rules used to compare strings under a given character set; optional, a default is applied if omitted. To list the collations of a character set: show collation like 'utf8%'; Collation names follow the pattern character_set_name_ci, character_set_name_cs, or character_set_name_bin: _ci means case-insensitive, _cs means case-sensitive, and _bin compares by encoded byte values.
  4. Example: CREATE DATABASE my_test_tb CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

TODO: character sets and collations are a fairly large topic; a separate article will cover them later.

Select a database when logging in

mysql -uroot -D database_name -p

Select a database after logging in

use database_name;

Show the currently selected database

select database();

Create a new database

create database if not exists mysql_practice;

The newly created database can be inspected with:

show create database mysql_practice;

As you can see, if no character set and collation were specified when the database was created, a default pair is applied.
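
For example, on a MySQL 8 server with stock settings the output typically looks roughly like this (the exact character set and collation depend on the server configuration):

CREATE DATABASE `mysql_practice` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci */ /*!80016 DEFAULT ENCRYPTION='N' */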

List all databases visible to the current account

show databases;

Drop a database

drop database if exists mysql_practice;

In MySQL, schema is a synonym for database, so the database can also be dropped with:

drop schema if exists mysql_practice;

Table

Syntax for creating a table in MySQL

CREATE TABLE [IF NOT EXISTS] table_name(
    column_1_definition,
    column_2_definition,
    ...,
    table_constraints
) ENGINE=storage_engine;

Column definition syntax:

column_name data_type(length) [NOT NULL] [DEFAULT value] [AUTO_INCREMENT] column_constraint;

Table constraints: UNIQUE, CHECK, PRIMARY KEY, and FOREIGN KEY.
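
To make the column-definition syntax and the constraint types above concrete, here is a small throwaway table (a sketch only; the table and column names are made up and are not part of the bookstore schema):

CREATE TABLE IF NOT EXISTS demo_account(
    id INT AUTO_INCREMENT,                        -- auto-generated key
    email VARCHAR(100) NOT NULL,                  -- NOT NULL column option
    status VARCHAR(20) NOT NULL DEFAULT 'ACTIVE', -- DEFAULT value
    age INT NULL,
    PRIMARY KEY(id),                              -- PRIMARY KEY table constraint
    UNIQUE(email),                                -- UNIQUE constraint
    CHECK(age >= 0)                               -- CHECK constraint (enforced since MySQL 8.0.16)
) ENGINE=INNODB;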

View a table's definition

desc table_name;

Create the mysql_practice tables

USE mysql_practice;

DROP TABLE IF EXISTS customer_order;
DROP TABLE IF EXISTS book;
DROP TABLE IF EXISTS book_category;
DROP TABLE IF EXISTS customer_address;
DROP TABLE IF EXISTS customer;
DROP TABLE IF EXISTS region;

-- region, data from: https://github.com/xiangyuecn/AreaCity-JsSpider-StatsGov
CREATE TABLE IF NOT EXISTS region(
    id INT AUTO_INCREMENT,
    pid INT NOT NULL,
    deep INT NOT NULL,
    name VARCHAR(200) NOT NULL,
    pinyin_prefix VARCHAR(10) NOT NULL,
    pinyin VARCHAR(200) NOT NULL,
    ext_id VARCHAR(100) NOT NULL,
    ext_name VARCHAR(200) NOT NULL,
    PRIMARY KEY(id)
);

-- customer
CREATE TABLE IF NOT EXISTS customer(
    id INT AUTO_INCREMENT,
    no VARCHAR(50) NOT NULL,
    first_name VARCHAR(255) NOT NULL,
    last_name VARCHAR(255) NOT NULL,
    status VARCHAR(20) NOT NULL,
    phone_number VARCHAR(20) NULL,
    updated_at DATETIME NOT NULL,
    created_at DATETIME NOT NULL,
    PRIMARY KEY(id),
    UNIQUE(no)
) ENGINE=INNODB;

-- customer address
CREATE TABLE IF NOT EXISTS customer_address(
    id INT AUTO_INCREMENT,
    customer_id INT NOT NULL,
    area_id INT NULL,
    address_detail VARCHAR(200) NULL,
    is_default BIT NOT NULL,
    updated_at DATETIME NOT NULL,
    created_at DATETIME NOT NULL,
    PRIMARY KEY(id),
    FOREIGN KEY(customer_id) REFERENCES customer (id) ON UPDATE RESTRICT ON DELETE CASCADE
) ENGINE=INNODB;

-- book category
CREATE TABLE IF NOT EXISTS book_category(
    id INT AUTO_INCREMENT,
    code VARCHAR(200) NOT NULL,
    name VARCHAR(200) NOT NULL,
    parent_id INT NULL,
    deep INT NULL,
    updated_at DATETIME NOT NULL,
    created_at DATETIME NOT NULL,
    PRIMARY KEY(id)
);

-- book
CREATE TABLE IF NOT EXISTS book(
    id INT AUTO_INCREMENT,
    category_id INT NOT NULL,
    no VARCHAR(50) NOT NULL,
    name VARCHAR(200) NOT NULL,
    status VARCHAR(50) NOT NULL,
    unit_price DOUBLE NOT NULL,
    author VARCHAR(50) NULL,
    publish_date DATETIME NULL,
    publisher VARCHAR(200) NOT NULL,
    updated_at DATETIME NOT NULL,
    created_at DATETIME NOT NULL,
    PRIMARY KEY(id),
    FOREIGN KEY (category_id) REFERENCES book_category (id) ON UPDATE RESTRICT ON DELETE CASCADE
);

-- orders
CREATE TABLE IF NOT EXISTS customer_order(
    id INT AUTO_INCREMENT,
    no VARCHAR(50) NOT NULL,
    customer_id INT NOT NULL,
    book_id INT NOT NULL,
    quantity INT NOT NULL,
    total_price DOUBLE NOT NULL,
    discount DOUBLE NULL,
    order_date DATETIME NOT NULL,
    updated_at DATETIME NOT NULL,
    created_at DATETIME NOT NULL,
    PRIMARY KEY(id),
    FOREIGN KEY (customer_id) REFERENCES customer(id) ON UPDATE RESTRICT ON DELETE CASCADE,
    FOREIGN KEY (book_id) REFERENCES book (id) ON UPDATE RESTRICT ON DELETE CASCADE
) ENGINE=INNODB;
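
Once the script has run, the result can be double-checked with the commands introduced earlier plus SHOW CREATE TABLE (a quick sanity check, not part of the dump script itself):

SHOW TABLES;
DESC customer;
SHOW CREATE TABLE customer_order;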

Import the region data

Download the region CSV data: [Level 3] province / city / district data download.

Import statement:

LOAD DATA INFILE '/tmp/ok_data_level3.csv'
INTO TABLE region
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;

If the import fails with:

ERROR 1290 (HY000): The MySQL server is running with the --secure-file-priv option so it cannot execute this statement
  • Use the command mdfind -name my.cnf to locate the MySQL config file my.cnf (mdfind is macOS-specific);
  • One fix is to adjust the secure-file-priv setting there (not actually tested here; the LOAD DATA LOCAL INFILE approach was used for the most part; see the secure_file_priv check after this list);
  • Or use LOAD DATA LOCAL INFILE instead of LOAD DATA INFILE, i.e.:

    LOAD DATA LOCAL INFILE '/tmp/ok_data_level3.csv'
    INTO TABLE region
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 ROWS;

    If this errors with:

    Error Code: 3948. Loading local data is disabled; this must be enabled on both the client and server sides

    or with:

    ERROR 1148 (42000): The used command is not allowed with this MySQL version
  • Check the setting: show variables like "local_infile";
  • Change the setting: set global local_infile = 1;
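
As referenced in the list above, the value of the secure-file-priv option mentioned in the error can be inspected directly (a standard MySQL system variable; this check is an added convenience, not from the original steps):

show variables like 'secure_file_priv';  -- NULL disables server-side file import/export; a path restricts it to that directory; an empty string means no restriction
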
Generate customer data

Create a stored procedure (SP):

USE mysql_practice;
DROP PROCEDURE IF EXISTS sp_generate_customers;
DELIMITER $

CREATE PROCEDURE sp_generate_customers()
BEGIN
    -- Generate 10000 customer and customer_address
    SET @fNameIndex = 1;
    SET @lNameIndex = 1;
    loop_label_f: LOOP
        IF @fNameIndex > 100 THEN
            LEAVE loop_label_f;
        END IF;
        SET @fName = ELT(@fNameIndex, "James","Mary","John","Patricia","Robert","Linda","Michael","Barbara","William","Elizabeth","David","Jennifer","Richard","Maria","Charles","Susan","Joseph","Margaret","Thomas","Dorothy","Christopher","Lisa","Daniel","Nancy","Paul","Karen","Mark","Betty","Donald","Helen","George","Sandra","Kenneth","Donna","Steven","Carol","Edward","Ruth","Brian","Sharon","Ronald","Michelle","Anthony","Laura","Kevin","Sarah","Jason","Kimberly","Matthew","Deborah","Gary","Jessica","Timothy","Shirley","Jose","Cynthia","Larry","Angela","Jeffrey","Melissa","Frank","Brenda","Scott","Amy","Eric","Anna","Stephen","Rebecca","Andrew","Virginia","Raymond","Kathleen","Gregory","Pamela","Joshua","Martha","Jerry","Debra","Dennis","Amanda","Walter","Stephanie","Patrick","Carolyn","Peter","Christine","Harold","Marie","Douglas","Janet","Henry","Catherine","Carl","Frances","Arthur","Ann","Ryan","Joyce","Roger","Diane");
        loop_label_last: LOOP
            IF @lNameIndex > 100 THEN
                LEAVE loop_label_last;
            END IF;
            SET @lName = ELT(@lNameIndex, "Smith","Johnson","Williams","Jones","Brown","Davis","Miller","Wilson","Moore","Taylor","Anderson","Thomas","Jackson","White","Harris","Martin","Thompson","Garcia","Martinez","Robinson","Clark","Rodriguez","Lewis","Lee","Walker","Hall","Allen","Young","Hernandez","King","Wright","Lopez","Hill","Scott","Green","Adams","Baker","Gonzalez","Nelson","Carter","Mitchell","Perez","Roberts","Turner","Phillips","Campbell","Parker","Evans","Edwards","Collins","Stewart","Sanchez","Morris","Rogers","Reed","Cook","Morgan","Bell","Murphy","Bailey","Rivera","Cooper","Richardson","Cox","Howard","Ward","Torres","Peterson","Gray","Ramirez","James","Watson","Brooks","Kelly","Sanders","Price","Bennett","Wood","Barnes","Ross","Henderson","Coleman","Jenkins","Perry","Powell","Long","Patterson","Hughes","Flores","Washington","Butler","Simmons","Foster","Gonzales","Bryant","Alexander","Russell","Griffin","Diaz","Hayes");
            -- insert into customer
            INSERT INTO customer(no, first_name, last_name, status, phone_number, updated_at, created_at)
            VALUES(
                REPLACE(LEFT(uuid(), 16), '-', ''),
                @fName, @lName, 'ACTIVE',
                null, curdate(), curdate()
            );
            -- insert into customer_address
            SET @randomArea = 0;
            SELECT id INTO @randomArea FROM region WHERE deep = 2 ORDER BY RAND() LIMIT 1;
            INSERT INTO customer_address(customer_id, area_id, address_detail, is_default, updated_at, created_at)
            VALUES(
                @@Identity,
                @randomArea,
                '',
                1,
                curdate(),
                curdate()
            );
            SET @lNameIndex = @lNameIndex + 1;
        END LOOP loop_label_last;
        SET @lNameIndex = 1; -- Note: assign 1 to last name index, for next loop.
        SET @fNameIndex = @fNameIndex + 1;
    END LOOP loop_label_f;
    -- update address_detail in customer_address
    UPDATE customer_address ca
    JOIN region r ON ca.area_id = r.id AND r.deep = 2
    JOIN region r2 ON r.pid = r2.id AND r2.deep = 1
    JOIN region r3 ON r2.pid = r3.id AND r3.deep = 0
    SET ca.address_detail = CONCAT(r3.ext_name, r2.ext_name, r.ext_name);
END $

DELIMITER ;

Call the SP:

call sp_generate_customers();
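
A quick sanity check after the call (the two loops above insert 100 × 100 = 10,000 customers, each with one address):

select count(*) from customer;          -- expected: 10000
select count(*) from customer_address;  -- expected: 10000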

Generate product categories and product data

Step 0: Manually insert the product categories into the book_category table.

INSERT INTO book_category(code, name, parent_id, deep, updated_at, created_at)
VALUES
('BOOK', 'Book', 0, 0, curdate(), curdate()),
('BOOK_CODE', 'Code Book', 1, 1, curdate(), curdate()),
('BOOK_CHILDREN', 'Children Book', 1, 1, curdate(), curdate()),
('BOOK_SCIENCE', 'Science Book', 1, 1, curdate(), curdate());
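
The science-book insert in step 3 below hardcodes category_id = 4; since the ids come from AUTO_INCREMENT, it is worth confirming which id each category actually received:

select id, code, name from book_category;
-- on a freshly created table this should give 1 BOOK, 2 BOOK_CODE, 3 BOOK_CHILDREN, 4 BOOK_SCIENCE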

Step 1: Write a small crawler in Python to scrape book information from an online bookstore.

The following scrapes the book list returned by a Dangdang search for the keyword "science".

import requests
import csv
from bs4 import BeautifulSoup

def crawl(url):
    res = requests.get(url)
    res.encoding = 'gb18030'
    soup = BeautifulSoup(res.text, 'html.parser')
    n = 0
    section = soup.find('ul', id='component_59')
    allLIs = section.find_all('li')
    # print(allLIs)
    with open('output_science.csv', 'w', encoding='utf8') as f:
        # the content may contain ',', so use '#' as the CSV delimiter
        csv_writer = csv.writer(f, delimiter='#')
        # header row: index, title, price, author, publish date, publisher
        csv_writer.writerow(['序号', '书名', '价格', '作者', '出版时间', '出版社'])
        for books in allLIs:
            title = books.select('.name')[0].text.strip().split(' ', 1)[0].strip()
            price = books.select('.search_pre_price')[0].text.strip('¥')
            authorInfo = books.select('.search_book_author')[0].text.strip().split('/')
            author = authorInfo[0]
            publishDate = authorInfo[1]
            publisher = authorInfo[2]
            n += 1
            csv_writer.writerow([n, title, price, author, publishDate, publisher])

url = 'http://search.dangdang.com/?key=%BF%C6%D1%A7&act=input'
crawl(url)

Step 2: Import the CSV data into the MySQL table mock_science.

CREATE TABLE `mock_science` (
  `id` int(11) NOT NULL,
  `name` varchar(200) DEFAULT NULL,
  `price` double DEFAULT NULL,
  `author` varchar(100) DEFAULT NULL,
  `publish_date` varchar(100) DEFAULT NULL,
  `publisher` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
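
The import itself can again use LOAD DATA LOCAL INFILE; a sketch, assuming the crawler's output_science.csv from step 1 has been copied to /tmp (the '#' delimiter and single header row match what the script writes):

LOAD DATA LOCAL INFILE '/tmp/output_science.csv'
INTO TABLE mock_science
FIELDS TERMINATED BY '#'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS
(id, name, price, author, publish_date, publisher);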

Step 3: Insert the science books into the book table.

INSERT INTO book(category_id, no, name, status, unit_price, author, publish_date, publisher, updated_at, created_at)
SELECT 4,
       REPLACE(LEFT(uuid(), 16), '-', ''),
       name,
       'ACTIVE',
       price,
       author,
       publish_date,
       publisher,
       curdate(),
       curdate()
FROM mock_science;

Repeat steps 1 to 3 to insert more products. The practice database ultimately scraped the first page of results for three search keywords: JAVA, children, and science.
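
After repeating the steps for all three keywords, the load per category can be checked with a simple grouping query (a sanity check, not part of the original steps):

select category_id, count(*) from book group by category_id;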

Generate order data

A stored procedure that generates random order data (note: the data this SP produces still needs further processing):

USE mysql_practice;
DROP PROCEDURE IF EXISTS sp_generate_orders;
DELIMITER $

-- Reference: https://www.mysqltutorial.org/select-random-records-database-table.aspx
-- Generate orders for the last two years.
-- each day has orders in the range: [500, 5000]
CREATE PROCEDURE sp_generate_orders()
BEGIN
    SET @startDate = '2020-03-01';
    SET @endDate = curdate();
    loop_label_p: LOOP
        IF @startDate > @endDate THEN
            LEAVE loop_label_p;
        END IF;
        SET @randCustomerTotal = FLOOR(RAND()*50) + 100;
        SET @randBookTotal = FLOOR(RAND()*5) + 1;
        SET @randQty = FLOOR(RAND()*3) + 1;
        SET @query1 = CONCAT('INSERT INTO customer_order(no, customer_id, book_id, quantity, total_price, discount, order_date, updated_at, created_at)');
        SET @query1 = CONCAT(@query1, ' select ', "'", uuid(), "'", ', c.id, p.id,', @randQty, ', 0, 0, ', "'", @startDate, "'", ',', "'", curdate(), "'", ',', "'", curdate(), "'");
        SET @query1 = CONCAT(@query1, ' FROM (select id from customer ORDER BY RAND() LIMIT ', @randCustomerTotal, ') c join ');
        SET @query1 = CONCAT(@query1, ' (select id from book order by rand() limit ', @randBookTotal, ') p ');
        SET @query1 = CONCAT(@query1, 'where c.id is not null');
        PREPARE increased FROM @query1;
        EXECUTE increased;
        SET @startDate = DATE_ADD(@startDate, INTERVAL 1 DAY);
    END LOOP loop_label_p;
END $

DELIMITER ;
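
The procedure is invoked the same way as the customer one (as the next paragraph notes, it is worth adding the indexes below before running it):

call sp_generate_orders();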

In total this generates hundreds of thousands, possibly millions, of order rows; it is best to add a few simple indexes first, otherwise the queries become very slow. They can also be added right after the tables are created.

Add indexes:

ALTER TABLE book ADD INDEX idx_unit_price(unit_price);
ALTER TABLE customer_order ADD INDEX idx_order_no(no);
ALTER TABLE customer_order ADD INDEX idx_order_date(order_date);
ALTER TABLE customer_order ADD INDEX idx_quantity(quantity);

Update the order no:

-- update order_no
-- please note it is better to add the indexes first, otherwise this will be slow.
update customer_order
set no = concat(REPLACE(LEFT(no, 16), '-', ''), customer_id, book_id)
where no is not null;

If you don't want duplicate order numbers, they can be regenerated with the following SQL:

-- handle duplicate order no
update customer_order co
join (
    select no from customer_order co2 group by co2.no having count(*) > 1
) as cdo on co.no = cdo.no
set co.no = concat(REPLACE(LEFT(uuid(), 16), '-', ''), customer_id, book_id);

If duplicate order numbers remain, keep running the SQL above until none are left.
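
Whether any duplicates remain can be checked with the same grouping used in the update above:

select no, count(*) from customer_order group by no having count(*) > 1;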

Update total_price in the order table:

-- update total price
update customer_order co
join book b on co.book_id = b.id
SET co.total_price = co.quantity * b.unit_price;

At this point, the database tables and the corresponding mock data are essentially complete. Back everything up with mysqldump:

mysqldump -u [username] -p[password] [database_name] > [dump_file.sql]

Next steps

  • View
  • Stored Procedure
  • Function
  • Trigger
  • Scheduled task (Job)