
PCA Plots of the CIFAR-10, CIFAR-100, MNIST, and Fashion MNIST Datasets (Using Python, matplotlib, and seaborn)

This page performs principal component analysis (PCA) on the datasets that ship with Keras (CIFAR-10, CIFAR-100, MNIST, Fashion MNIST), loading them via the tensorflow_datasets package, and plots the resulting first and second principal component scores.

Contents

  1. Preparation
  2. Preparing the Datasets and Their PCA Plots

Related external pages

Web page on the datasets bundled with Keras: https://keras.io/ja/datasets/

Google Colaboratory page:

Clicking the following link opens a Google Colaboratory notebook. After signing in with a Google account, you can edit and rerun the code in the notebook; your edits do not affect anyone else, and the edited notebook can be saved to your own Google Drive.

https://colab.research.google.com/drive/1Blm3l62DN_4dqUoltwhq-sdtsfr7ZaiU?usp=sharing

1. Preparation

Setting up Python (on Windows or Ubuntu)

Related external pages

Official Python website: https://www.python.org/

Installing TensorFlow and tensorflow_datasets

Installing the Python packages numpy, pandas, seaborn, matplotlib, and scikit-learn
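
As a minimal sketch of the installation step (assuming pip is available and the standard PyPI package names; this command is not part of the original page), the packages used below can be installed with:

pip install -U tensorflow tensorflow_datasets numpy pandas seaborn matplotlib scikit-learn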

2. Preparing the Datasets and Their PCA Plots

Web page on the datasets bundled with Keras: https://keras.io/ja/datasets/

Preparation for the PCA plots

import pandas as pd
import seaborn as sns
sns.set()                           # apply seaborn's default plot style
import numpy as np
import sklearn.decomposition
%matplotlib inline
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')   # suppress warnings (including Matplotlib warnings)

# Principal component analysis: return the first n principal component scores of A
def prin(A, n):
    pca = sklearn.decomposition.PCA(n_components=n)
    return pca.fit_transform(A)

# Obtain the first two principal components
def prin2(A):
    return prin(A, 2)

# Scatter plot of the first two columns of M, colored by the labels in b
def scatter_plot(M, b, alpha):
    a12 = pd.DataFrame( M[:,0:2], columns=['a1', 'a2'] )
    a12['target'] = b
    sns.scatterplot(x='a1', y='a2', hue='target', data=a12, palette=sns.color_palette("hls", np.max(b) + 1), legend="full", alpha=alpha)

# PCA plot: project A onto its first two principal components and scatter-plot them
def pcaplot(A, b, alpha):
    scatter_plot(prin2(A), b, alpha)

[image]
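
The helper functions can be tried on a small dataset before applying them to the image datasets below. The following is a minimal sketch (it uses scikit-learn's bundled Iris dataset, which is not used elsewhere on this page):

from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)   # X: (150, 4) feature matrix, y: class labels 0-2
pcaplot(X, y, 0.8)                  # plot the first and second principal component scores, colored by class
plt.show()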

CIFAR-10 dataset

  1. Load the CIFAR-10 dataset
    from __future__ import absolute_import, division, print_function, unicode_literals
    import tensorflow as tf
    import numpy as np
    import tensorflow_datasets as tfds
    
    %matplotlib inline
    import matplotlib.pyplot as plt
    import warnings
    warnings.filterwarnings('ignore')   # Suppress Matplotlib warnings
    
    cifar10, cifar10_metadata = tfds.load('cifar10', with_info = True, shuffle_files=True, as_supervised=True, batch_size = -1)
    x_train, y_train, x_test, y_test = cifar10['train'][0], cifar10['train'][1], cifar10['test'][0], cifar10['test'][1]
    print(cifar10_metadata)
    # Convert x_train, x_test, y_train, y_test to NumPy ndarrays and rescale pixel values from 0-255 to 0-1
    print(type(x_train), x_train.shape, np.max(x_train), np.min(x_train))
    print(type(x_test), x_test.shape, np.max(x_test), np.min(x_test))
    print(type(y_train), y_train.shape, np.max(y_train), np.min(y_train))
    print(type(y_test), y_test.shape, np.max(y_test), np.min(y_test))
    # Convert to NumPy arrays and rescale pixel values to 0-1
    x_train = x_train.numpy().astype("float32") / 255.0
    x_test = x_test.numpy().astype("float32") / 255.0
    y_train = y_train.numpy()
    y_test = y_test.numpy()
    print(type(x_train), x_train.shape, np.max(x_train), np.min(x_train))
    print(type(x_test), x_test.shape, np.max(x_test), np.min(x_test))
    print(type(y_train), y_train.shape, np.max(y_train), np.min(y_train))
    print(type(y_test), y_test.shape, np.max(y_test), np.min(y_test))
    

    [image]
  2. PCA plot of the CIFAR-10 dataset
    x_train = x_train.reshape(x_train.shape[0], -1) # flatten each image into a 1-D vector
    x_test = x_test.reshape(x_test.shape[0], -1) # flatten each image into a 1-D vector
    print(x_train.shape)
    print(x_test.shape)
    pcaplot(np.concatenate( (x_train, x_test) ), np.concatenate( (y_train, y_test) ), 0.1)
    

    [image]
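
Running PCA on all of CIFAR-10 (50,000 training plus 10,000 test images, each flattened to 3,072 values) can take noticeable time and memory. As a rough sketch (the subset size of 10,000 and the fixed seed are arbitrary choices, not part of the original procedure), a random subset of the training images can be plotted instead:

rng = np.random.default_rng(0)                                  # fixed seed for reproducibility
idx = rng.choice(x_train.shape[0], size=10000, replace=False)   # pick 10,000 training images at random
pcaplot(x_train[idx], y_train[idx], 0.1)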

CIFAR-100 dataset

  1. Load the CIFAR-100 dataset
    from __future__ import absolute_import, division, print_function, unicode_literals
    import tensorflow as tf
    import numpy as np
    import tensorflow_datasets as tfds
    
    %matplotlib inline
    import matplotlib.pyplot as plt
    import warnings
    warnings.filterwarnings('ignore')   # Suppress Matplotlib warnings
    
    cifar100, cifar100_metadata = tfds.load('cifar100', with_info = True, shuffle_files=True, as_supervised=True, batch_size = -1)
    x_train, y_train, x_test, y_test = cifar100['train'][0], cifar100['train'][1], cifar100['test'][0], cifar100['test'][1]
    print(cifar100_metadata)
    # Convert x_train, x_test, y_train, y_test to NumPy ndarrays and rescale pixel values from 0-255 to 0-1
    print(type(x_train), x_train.shape, np.max(x_train), np.min(x_train))
    print(type(x_test), x_test.shape, np.max(x_test), np.min(x_test))
    print(type(y_train), y_train.shape, np.max(y_train), np.min(y_train))
    print(type(y_test), y_test.shape, np.max(y_test), np.min(y_test))
    # Convert to NumPy arrays and rescale pixel values to 0-1
    x_train = x_train.numpy().astype("float32") / 255.0
    x_test = x_test.numpy().astype("float32") / 255.0
    y_train = y_train.numpy()
    y_test = y_test.numpy()
    print(type(x_train), x_train.shape, np.max(x_train), np.min(x_train))
    print(type(x_test), x_test.shape, np.max(x_test), np.min(x_test))
    print(type(y_train), y_train.shape, np.max(y_train), np.min(y_train))
    print(type(y_test), y_test.shape, np.max(y_test), np.min(y_test))
    

    [image]
  2. PCA plot of the CIFAR-100 dataset
    x_train = x_train.reshape(x_train.shape[0], -1) # flatten each image into a 1-D vector
    x_test = x_test.reshape(x_test.shape[0], -1) # flatten each image into a 1-D vector
    print(x_train.shape)
    print(x_test.shape)
    pcaplot(np.concatenate( (x_train, x_test) ), np.concatenate( (y_train, y_test) ), 0.1)
    

    [image]
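
With 100 classes, the legend and the 100-color palette make the CIFAR-100 plot hard to read. As a rough sketch (the five classes selected below are an arbitrary choice, not part of the original procedure), plotting only a few classes can make the structure easier to see:

keep = np.isin(y_train, [0, 1, 2, 3, 4])     # boolean mask selecting five of the 100 classes
pcaplot(x_train[keep], y_train[keep], 0.3)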

MNIST dataset

  1. Load the MNIST dataset
    from __future__ import absolute_import, division, print_function, unicode_literals
    import tensorflow as tf
    import numpy as np
    import tensorflow_datasets as tfds
    
    %matplotlib inline
    import matplotlib.pyplot as plt
    import warnings
    warnings.filterwarnings('ignore')   # Suppress Matplotlib warnings
    
    mnist, mnist_metadata = tfds.load('mnist', with_info = True, shuffle_files=True, as_supervised=True, batch_size = -1)
    x_train, y_train, x_test, y_test = mnist['train'][0], mnist['train'][1], mnist['test'][0], mnist['test'][1]
    print(mnist_metadata)
    # Convert x_train, x_test, y_train, y_test to NumPy ndarrays and rescale pixel values from 0-255 to 0-1
    print(type(x_train), x_train.shape, np.max(x_train), np.min(x_train))
    print(type(x_test), x_test.shape, np.max(x_test), np.min(x_test))
    print(type(y_train), y_train.shape, np.max(y_train), np.min(y_train))
    print(type(y_test), y_test.shape, np.max(y_test), np.min(y_test))
    # Convert to NumPy arrays and rescale pixel values to 0-1
    x_train = x_train.numpy().astype("float32") / 255.0
    x_test = x_test.numpy().astype("float32") / 255.0
    y_train = y_train.numpy()
    y_test = y_test.numpy()
    print(type(x_train), x_train.shape, np.max(x_train), np.min(x_train))
    print(type(x_test), x_test.shape, np.max(x_test), np.min(x_test))
    print(type(y_train), y_train.shape, np.max(y_train), np.min(y_train))
    print(type(y_test), y_test.shape, np.max(y_test), np.min(y_test))
    

    [image]
  2. PCA plot of the MNIST dataset
    x_train = x_train.reshape(x_train.shape[0], -1) # flatten each image into a 1-D vector
    x_test = x_test.reshape(x_test.shape[0], -1) # flatten each image into a 1-D vector
    print(x_train.shape)
    print(x_test.shape)
    pcaplot(np.concatenate( (x_train, x_test) ), np.concatenate( (y_train, y_test) ), 0.1)
    

    [image]
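
To check how much of the total variance the first two components actually account for, scikit-learn's PCA object exposes explained_variance_ratio_ after fitting. A minimal sketch (this check is not part of the original procedure):

pca = sklearn.decomposition.PCA(n_components=2)
scores = pca.fit_transform(x_train)     # x_train has already been flattened to shape (60000, 784)
print(pca.explained_variance_ratio_)    # fraction of the variance captured by PC1 and PC2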

Fashion MNIST dataset

  1. Load the Fashion MNIST dataset
    from __future__ import absolute_import, division, print_function, unicode_literals
    import tensorflow as tf
    import numpy as np
    import tensorflow_datasets as tfds
    
    %matplotlib inline
    import matplotlib.pyplot as plt
    import warnings
    warnings.filterwarnings('ignore')   # Suppress Matplotlib warnings
    
    fashion_mnist, fashion_mnist_metadata = tfds.load('fashion_mnist', with_info = True, shuffle_files=True, as_supervised=True, batch_size = -1)
    x_train, y_train, x_test, y_test = fashion_mnist['train'][0], fashion_mnist['train'][1], fashion_mnist['test'][0], fashion_mnist['test'][1]
    print(fashion_mnist_metadata)
    # Convert x_train, x_test, y_train, y_test to NumPy ndarrays and rescale pixel values from 0-255 to 0-1
    print(type(x_train), x_train.shape, np.max(x_train), np.min(x_train))
    print(type(x_test), x_test.shape, np.max(x_test), np.min(x_test))
    print(type(y_train), y_train.shape, np.max(y_train), np.min(y_train))
    print(type(y_test), y_test.shape, np.max(y_test), np.min(y_test))
    # Convert to NumPy arrays and rescale pixel values to 0-1
    x_train = x_train.numpy().astype("float32") / 255.0
    x_test = x_test.numpy().astype("float32") / 255.0
    y_train = y_train.numpy()
    y_test = y_test.numpy()
    print(type(x_train), x_train.shape, np.max(x_train), np.min(x_train))
    print(type(x_test), x_test.shape, np.max(x_test), np.min(x_test))
    print(type(y_train), y_train.shape, np.max(y_train), np.min(y_train))
    print(type(y_test), y_test.shape, np.max(y_test), np.min(y_test))
    

    [image]
  2. PCA plot of the Fashion MNIST dataset
    x_train = x_train.reshape(x_train.shape[0], -1) # flatten each image into a 1-D vector
    x_test = x_test.reshape(x_test.shape[0], -1) # flatten each image into a 1-D vector
    print(x_train.shape)
    print(x_test.shape)
    pcaplot(np.concatenate( (x_train, x_test) ), np.concatenate( (y_train, y_test) ), 0.1)
    

    [image]
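
To save a plot to a file rather than only displaying it inline, matplotlib's savefig can be called in the same cell that draws the plot. A minimal sketch (the file name and resolution are arbitrary choices, not part of the original procedure):

pcaplot(np.concatenate( (x_train, x_test) ), np.concatenate( (y_train, y_test) ), 0.1)
plt.savefig('fashion_mnist_pca.png', dpi=150, bbox_inches='tight')   # write the current figure to a PNG file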